WPForms Blog

How to Write a Summary of Survey Results

How to Create a Survey Results Report (+7 Examples to Steal)

Claire Broadley

Content Manager

Jared Atchison

Do you need to write a survey results report?

A great report will increase the impact of your survey results and encourage more readers to engage with the content.


In This Article

1. Use Data Visualization
2. Write the Key Facts First
3. Write a Short Survey Summary
4. Explain the Motivation for Your Survey
5. Put Survey Statistics in Context
6. Tell the Reader What the Outcome Should Be
7. Export Your Survey Results in Other Formats
Bonus Tip: Export Data for Survey Analysis
FAQs on Writing Survey Summaries

How to Write a Survey Results Report

Let’s walk through some tricks and techniques with real examples.

The most important thing about a survey report is that it allows readers to make sense of data. Visualizations are a key component of any survey summary.

Examples of Survey Visualizations

Pie charts are perfect when you want to bring statistics to life. Here’s a great example from a wedding survey:

Example of a pie chart in a survey summary introduction

Pie charts can be simple and still get the message across. A well-designed chart will also add impact and reinforce the story you want to tell.

Here’s another great example from a homebuyer survey introduction:

Summary of survey results in a pie chart

If your survey is made up of open-ended questions, it might be more challenging to produce charts. If that’s the case, you can write up your findings instead. We’ll look at that next.
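If you're generating a chart like this outside your survey tool, the underlying math is simple: each pie slice is one answer's share of total responses. Here's a minimal Python sketch; the answers and counts below are invented for illustration:

```python
from collections import Counter

# Hypothetical single-choice survey responses (illustrative data only).
responses = [
    "Outdoor venue", "Outdoor venue", "Banquet hall",
    "Outdoor venue", "Beach", "Banquet hall", "Outdoor venue",
]

counts = Counter(responses)
total = sum(counts.values())

# Each answer's percentage of the total -- exactly what a pie chart slices up.
shares = {answer: round(100 * n / total, 1) for answer, n in counts.items()}

for answer, pct in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{answer}: {pct}%")
```

The resulting percentages can be fed into any charting tool: a spreadsheet, a plotting library, or your report template.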

When you’re thinking about how to write a summary of survey results, remember that the introduction needs to get the reader’s attention.

Focusing on key facts helps you to do that right at the start.

This is why it’s usually best to write the survey introduction at the end once the rest of the survey report has been compiled. That way, you know what the big takeaways are.

This is an easy and powerful way to write a survey introduction that encourages the reader to investigate.

Examples of Survey Summaries With Key Facts

Here’s an awesome example of a survey summary that immediately draws the eye.

The key finding is presented first, and then we see a fact about half the group immediately after:

Survey summary with key facts

Using this order lets us see the impactful survey responses right up top.

If you need help deciding which questions to ask in your survey, check out this article on the best survey questions to include.

Your survey summary should give the reader a complete overview of the content. But you don’t want to take up too much space.

Survey summaries are sometimes called executive summaries because they’re designed to be quickly digested by decision-makers.

You’ll want to filter out the less important findings and focus on what matters. A 1-page summary is enough to get this information across. You might want to leave space for a table of contents on this page too.

Examples of Short Survey Introductions

One way to keep a survey summary short is to use a teaser at the start.

Here’s an example introduction that doesn’t state all of its findings but gives us the incentive to keep reading:

Survey summary report teaser

And here’s a great survey introduction that summarizes the findings in just one sentence:

Survey introduction with summary of findings

In WPForms, you can reduce the size of your survey report by excluding questions you don’t need. We decided to remove this question from the report PDF because it has no answers. Just click the arrow at the top, and it won’t appear in the final printout:

Exclude question from survey introduction report

This is a great way to quickly build a PDF summary of your survey that only includes the most important questions. You can also briefly explain your methodology.

When you create a survey in WordPress, you probably have a good idea of your reasons for doing so.

Make your purpose clear in the intro. For example, if you’re running a demographic survey, you might want to clarify that you’ll use this information to target your audience more effectively.

The reader must know exactly what you want to find out. Ideally, you should also explain why you wanted to create the survey in the first place. This can help you to reach the correct target audience for your survey.

Examples of Intros that Explain Motivation

This vehicle survey was carried out to help with future planning, so the introduction makes the purpose clear to the reader:

Explaining the motivation for a survey in survey results

Having focused questions can help to give your survey a clear purpose. We have some questionnaire examples and templates that can help with that.

Explaining why you ran the survey helps to give context, which we’ll talk about more next.

Including numbers in a survey summary is important. But your survey summary should tell a story too.

Adding numbers to your introduction will help draw the eye, but you’ll also want to explain what the numbers tell you.

Otherwise, you’ll have a list of statistics that don’t mean much to the reader.

Examples of Survey Statistics in Context

Here’s a great example of a survey introduction that uses the results from the survey to tell a story.

Survey summary introduction with context

Another way to put numbers in context is to present the results visually.

Here, WPForms has automatically created a table from our Likert Scale question that makes it easy to see a positive trend in the survey data:

WPForms survey summary results in a table

If you’d like to use a Likert scale to produce a chart like this, check out this article on the best Likert scale questions for survey forms.

Now that your survey report is done, you’ll likely want action to be taken based on your findings.

That’s why it’s a good idea to make a recommendation.

If you already explained your reasons for creating the survey, you can naturally add a few sentences on the outcomes you want to see.

Examples of Survey Introductions with Recommendations

Here’s a nice example of a survey introduction that clearly states the outcomes that the organization would like to happen now that the survey is published:

Survey introduction with recommendations

This helps to focus the reader on the content and helps them to understand why the survey is important. Respondents are more likely to give honest answers if they believe that a positive outcome will come from the survey.

You can also cite related research here to give your reasoning more weight.

You can easily create pie charts in the WPForms Surveys and Polls addon. It allows you to change the way your charts look without being overwhelmed by design options.

This handy feature will save tons of time when you’re composing your survey results.

Once you have your charts, exporting them allows you to use them in other ways. You may want to embed them in marketing materials like:

  • Presentation slides
  • Infographics
  • Press releases

WPForms makes it easy to export any graphic from your survey results so you can use it on your website or in slides.

Just use the dropdown to export your survey pie chart as a JPG or PDF:

Export survey pie chart

And that’s it! You now know how to create an impactful summary of survey results and add these to your marketing material or reports.

WPForms is the best form builder plugin for WordPress. Along with the best survey tools, it also has the best data export options.

Often, you’ll want to export form entries to analyze them in other tools. You can do exactly the same thing with your survey data.

For example, you can:

  • Export your form entries or survey data to Excel
  • Automatically send survey responses to a Google Sheet

We really like the Google Sheets addon in WPForms because it sends your entries to a Google Sheet as soon as they’re submitted. And you can connect any form or survey to a Sheet without writing any code.

wpforms to google sheets

The Google Sheets integration is powerful enough to send all of your metrics. You can add columns to your Sheet and map the data points right from your WordPress form.

This is an ideal solution if you want to give someone else access to your survey data so they can crunch the numbers in spreadsheet format.

We’ll finish up with a few questions we’ve been asked about survey reporting.

What Is a Survey Report and What Should It Include?

A survey report compiles all data collected during a survey and presents it objectively. The report often summarizes pages of data from all responses received and makes it easier for the audience to process and digest.

How Do You Present Survey Results in an Impactful Way?

The best way to present survey results is to use visualizations. Charts, graphs, and infographics will make your survey outcomes easier to interpret.

For online surveys, WPForms has an awesome Surveys and Polls addon that makes it easy to publish many types of surveys and collect data using special survey fields:

  • Likert Scale (sometimes called a matrix question)
  • Net Promoter Score (sometimes called an NPS survey)
  • Star Rating
  • Single Line Text
  • Multiple Choice (sometimes called radio buttons)

You can turn on survey reporting at any time, even if the form expiry date has passed.

To present your results, create a beautiful PDF by clicking Print Survey Report right from the WordPress dashboard:

Print survey results

Next Step: Make Your Survey Form

To create a great survey summary, you’ll want to start out with a great survey form. Check out this article on how to create a survey form online to learn how to create and customize your surveys in WordPress.

You can also:

  • Learn how to create a popup WordPress survey
  • Read some rating scale question examples
  • Get started easily with a customer survey template from the WPForms template library.

Ready to build your survey? Get started today with the easiest WordPress form builder plugin. WPForms Pro includes free survey form templates and offers a 14-day money-back guarantee.

If this article helped you out, please follow us on Facebook and Twitter for more free WordPress tutorials and guides.

Disclosure: Our content is reader-supported. This means if you click on some of our links, then we may earn a commission. See how WPForms is funded, why it matters, and how you can support us.


Claire Broadley

Claire is the Content Manager for the WPForms team. She has 13+ years' experience writing about WordPress and web hosting. Learn More





Survey Results: How To Analyze Data and Report on Findings

Best practices on how to effectively analyze survey results and report on findings, along with common analysis/reporting mistakes to avoid.


Apr 25, 2024

quantilope is the Consumer Intelligence Platform for all end-to-end research needs

In this blog, learn how to effectively analyze survey data and report on findings that portray an actionable insights story for key stakeholders.

Table of Contents:

  • How to analyze survey results
  • How to present survey results
  • How to write a survey report
  • Common mistakes in analyzing survey results
  • Best practices for presenting survey results
  • How quantilope streamlines the analysis and presentation of survey results

How to analyze survey results

Analyzing survey results can feel overwhelming, with so many variables to dig into when looking to pull out the most actionable, interesting consumer stories. Below, we’ll walk through how to make the most of your survey data through a thorough yet efficient analysis process.

Review your top survey questions

Begin your data analysis by identifying the key survey questions in your questionnaire that align with your broader market research questions or business objectives. These are the questions that most closely relate to what you’re trying to achieve with your research project and the ones you should focus on the most. Other variables throughout your survey are important, but they may be better leveraged as cross-analysis variables (i.e., variables you filter major questions by) rather than ones to be analyzed independently. Which brings us to our next step...

Analyze and cross-analyze your dropdown#toggle" data-dropdown-placement-param="top" data-term-id="292110335">quantitative data

Quantitative survey questions provide numerical information that can be statistically analyzed. Start by examining top-level numerical responses in your quantitative data (ratings, rankings, frequencies) for your most strategic survey questions. Then think about which variables might tell an even richer and more meaningful story when cut by subgroups (i.e., cross-tabulation), such as looking at buying behavior cut by a demographic variable (gender, age, etc.). This deeper level of analysis uncovers insights from survey respondents that may not have been apparent when examining survey variables in isolation. Take your time during this step to explore your data and identify interesting stories that you’ll eventually want to use in a final report. This is the fun part! At least, we at quantilope think so...
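As a rough illustration of what cross-tabulation does under the hood, here is a short Python sketch that cuts a single question by a demographic variable. The respondent records are invented for the example; a research platform or spreadsheet pivot table does the same counting for you:

```python
from collections import defaultdict

# Hypothetical respondent records: (age group, "Do you buy online?").
# Invented data for illustration only.
respondents = [
    ("18-34", "Yes"), ("18-34", "Yes"), ("18-34", "No"),
    ("35-54", "Yes"), ("35-54", "No"),  ("35-54", "No"),
    ("55+",   "No"),  ("55+",   "No"),  ("55+",   "Yes"),
]

# Build the cross-tab: answer counts within each demographic subgroup.
crosstab = defaultdict(lambda: defaultdict(int))
for age_group, answer in respondents:
    crosstab[age_group][answer] += 1

# Report the share of "Yes" answers per subgroup.
for age_group in sorted(crosstab):
    row = crosstab[age_group]
    pct_yes = 100 * row["Yes"] / sum(row.values())
    print(f"{age_group}: {pct_yes:.0f}% buy online")
```

Cutting the same question by different subgroups is often where the most interesting survey stories emerge.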

Consider statistical analysis

Next, run statistical analysis on relevant questions. Traditional agencies typically require help from a behavioral science/data processing team for this, but many automated platforms (like quantilope) can run statistical analysis without any manual effort required.

Statistical significance testing provides an added layer of validity to your data, giving stakeholders even more confidence in the recommendations you’re making. Knowing which data points are significantly stronger/weaker than others confirms where you can have the most confidence in your data.
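To make the idea concrete, here is a hand-rolled two-proportion z-test in Python, one common way to check whether two subgroup percentages differ significantly. The counts are made up for illustration; automated platforms run equivalent tests for you:

```python
import math

# Hypothetical counts: 120 of 200 respondents satisfied in group A,
# 90 of 200 in group B (invented numbers for illustration).
x_a, n_a = 120, 200
x_b, n_b = 90, 200

p_a, p_b = x_a / n_a, x_b / n_b
p_pool = (x_a + x_b) / (n_a + n_b)  # pooled proportion under the null hypothesis

# Standard error of the difference, then the z statistic.
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se

# |z| > 1.96 corresponds to significance at the 95% confidence level.
print(f"z = {z:.2f}, significant at 95%: {abs(z) > 1.96}")
```

In this made-up case the gap between 60% and 45% satisfaction is significant, so a report could flag it with confidence.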


How to present survey results

Data is a powerful tool, but it's only valuable if your audience can grasp its meaning. Visual representations of your quantitative data can offer insights into patterns or trends that you may have missed when looking strictly at the numbers, and they offer a clear, compelling way to present your findings to others.

Data visualization can sometimes be done while you’re analyzing and cross-analyzing your data (if using an automated platform like quantilope). Otherwise, this is the step in your insights process when you’ll take the findings from the analysis stage and give them life through intuitive charts and graphs.

Below are a few steps to clearly visualize insights once you collect data:

Choose your chart types:

The first step is to select the right chart type for your data based on the type of question asked. No one chart fits all types of data. Choose a chart that clearly displays each of your data points’ stories in the most appropriate way. Below are a few commonly used chart types in market research:

  • Column/bar graphs: Great for comparing categories.
  • Line charts: Show trends and changes over time compared to an initial benchmark (great for a brand tracking survey).
  • Pie charts: Used to display parts of a whole.
  • Scatter plots: Visualize the relationship between two variables (used in a Key Driver Analysis!).
  • Word clouds: Good for concise open-ended responses (i.e., brand names) to see which words appear biggest/smallest (representing the volume of feedback).

The right chart type will clearly display meaningful patterns and insights. quantilope’s platform makes it easy to toggle between different chart types and choose the one that best represents your data, with significance testing already included!

Leverage numerical tables:

Sometimes, nothing beats the precision and detail of a well-structured numerical table. When you need to provide exact values or compare specific data points, numerical tables are your go-to. When using numerical tables to present your findings, make sure they are:

  • Clear: Use explanatory headings and proper, consistent formatting.
  • Concise: Present only the essential data without unnecessary clutter.

How to write a survey report

Lastly, take your data analysis, complete with chart visualizations and statistical analyses, and build a final report such as a slide deck or an interactive dashboard.

This is where you’ll want to put your strategic thinking hat on to determine which charts, headlines, graphics, etc., will be most compelling to final stakeholders and key decision makers. Their buy-in doesn’t come from the data alone, but from how you organize and present it.

Below are a few considerations when building and writing your final survey report:

Start with methodology:

Start by clearly describing how you designed and administered your survey to respondents. Include details like:

  • Sampling methods: How were participants selected (random, convenience, representative)?
  • Sample size: How many people participated in your study?
  • Sampling timeframe: When did your study run?
  • Survey format: Where did you administer your survey (online, phone, in-person, etc.)?
  • Question types: Multiple choice, open-ended questions, Likert scales, and so on.
  • Advanced methods: Did you leverage any advanced methodologies beyond standard usage and attitude questions, such as NPS (Net Promoter Score) for customer satisfaction or a segmentation for need-based customer feedback?

Your methodology background is helpful to those reading your report for added context and credibility. You can also use this section of your report to define any complex methodologies used in your study that might require added explanation for readers without a market research background.

Craft a story:

Don't make the mistake of throwing data points at your audience. Part of reporting on your online surveys includes crafting narratives that tie your data findings together and sell your story to your audience. What patterns emerge? Are there any surprises? Embed these stories into your charts through headlines and chart descriptions, and tie them back to your research objectives whenever possible. Think carefully about the following when crafting your data story:

  • The big takeaway: What's the core message you want to convey?
  • Context: Why does this story matter in the greater scheme of your business?
  • Implications: What business decisions or stakeholder actions might come from these findings?

Organize your findings logically by themes or question categories, and include a summary or final takeaway at the end for readers who want a quick, digestible understanding of your study. Your story is what stakeholders and key decision makers look for in market research; it’s your chance to impress them and ensure your data findings generate real impact.

Incorporate infographics and other visual stimuli:

Aside from data charts, other visual stimuli add richness to your data presentation, making it more digestible and memorable. Consider these added visuals when presenting your data:

  • Infographics: Summarize key findings with icons, charts, and text.
  • Images: Add relatable pictures that resonate with your data and/or audience.
  • Color: Use color strategically to emphasize crucial points or to emulate a brand’s look/feel.
  • Qualitative data: Include insightful quotes or video responses (if applicable) to add additional stories, trends, or opinions to your report.

Common mistakes in analyzing survey results

Analyzing, presenting, and reporting on survey findings isn’t difficult when using the right tools and following the above best practices.

However, there are some things to keep in mind during these processes to avoid common mistakes:

Introducing sampling or reporting bias

Avoid biased results in your final survey analysis and presentation by controlling for things like sampling bias and reporting bias. Sampling bias occurs when you don’t use a truly representative sample of your target population; this can skew your results and portray inaccurate or misleading findings. Reporting bias occurs when you don’t account for personal biases in what you choose to share (i.e., cherry-picking the data that seems most positive or that supports a pre-existing idea, often referred to as confirmation bias). Avoid survey biases by having a second (or even third) colleague review your work at each stage before sharing it with final stakeholders.

Misinterpreting correlation as causation

Just because two variables are related doesn't mean one causes the other. Be cautious about drawing causal conclusions without strong supporting evidence. The only real way to determine causation is through a specialized statistical analysis like regression analysis.

Looking into every data point

Surveys produce a lot of valuable information, but you need to focus your attention on the metrics that generate impact for your research objective. It’s easy to get lost in an Excel data file or research platform when trying to look through every survey response cut by as many variables as you can think of.

Start your analysis by thinking strategically about your research as a whole. What were you hoping to find out from your study? Start there. Once you start exploring your major metrics, a story might naturally arise that leads you to further data cuts. Your data analysis should be comprehensive, yet efficient.

Best practices for presenting survey results

While the above elements are things you’ll want to avoid in your research analysis, here are some survey best practices you’ll want to keep in mind:

Know your audience

Tailor your report or presentation to your specific audience’s needs and level of understanding. This might even mean creating different versions of your report geared toward different audiences. Some stakeholders might be very technical and look for all the small details, while others just want a bare-minimum overview.

Keep it simple

Charts and graphs should make data easier to understand, not more confusing. Avoid using too many chart types or overwhelming viewers with too much information. If you had to pick and choose, which charts absolutely must be included to tell your full consumer story, and which are merely nice to have? Your final report doesn’t need to (and shouldn’t) house every possible data point and data cut from your study. That’s what your raw data file is for, and you can always go back to reference it when needed. Your report, however, is the main takeaway and summary of your study; it should be concise and to the point. Provide enough information for your audience to understand how you reached your conclusions, but avoid burying them in irrelevant details. Any extra data that you want to include but that doesn’t need to be front and center in your report can go in an accompanying appendix.

Communicate clearly

Don't make your audience struggle to decode your visuals. Each chart should have a very clear takeaway that a reader of any skill level can digest almost instantly. More complex charts should have clear headlines or interpretation notes, written in simple language that avoids technical or specialized terms.

How quantilope streamlines the analysis and presentation of survey results

quantilope’s automated Consumer Intelligence Platform saves clients from the tedious, manual processes of traditional market research, offering an end-to-end resource for questionnaire setup, real-time fielding, automated charting, and AI-assisted reporting.

From the start, work with your dedicated team of research consultants (or do it on your own through a DIY platform approach) to build a questionnaire with the simple drag and drop of U&A questions and advanced methods. Should you wish to streamline things even further, get a head start by leveraging a number of survey templates and customizing as needed.

quantilope’s platform offers all types of surveys, such as concept testing, ad effectiveness, and Better Brand Health Tracking, to name a few. Available for use in these surveys is quantilope’s largest suite of automated advanced methods, making even the most complex methodologies available to researchers of any background.

As soon as respondents begin to complete your survey, monitor response rates directly in the fielding tab, right at your fingertips. Get a jump start on survey data analysis as soon as you like, rather than waiting for fieldwork to close and for data files to arrive from a data processing team. Lean on quantilope’s AI co-pilot, quinn, to generate inspiration for chart headlines and report summaries and takeaways.

With quantilope, researchers have hands-on control of their survey analysis and reporting processes, with the opportunity to make clear business recommendations based on actionable insights.

Interested in learning more about quantilope’s Consumer Intelligence Platform? Get in touch below!


How to Write a Survey Report

Last Updated: February 16, 2024 Approved

This article was reviewed by Anne Schmidt . Anne Schmidt is a Chemistry Instructor in Wisconsin. Anne has been teaching high school chemistry for over 20 years and is passionate about providing accessible and educational chemistry content. She has over 9,000 subscribers to her educational chemistry YouTube channel. She has presented at the American Association of Chemistry Teachers (AATC) and was an Adjunct General Chemistry Instructor at Northeast Wisconsin Technical College. Anne was published in the Journal of Chemical Education as a Co-Author, has an article in ChemEdX, and has presented twice and was published with the AACT. Anne has a BS in Chemistry from the University of Wisconsin, Oshkosh, and an MA in Secondary Education and Teaching from Viterbo University. wikiHow marks an article as reader-approved once it receives enough positive feedback. In this case, several readers have written to tell us that this article was helpful to them, earning it our reader-approved status. This article has been viewed 408,926 times.

Once you have finished conducting a survey, all that is left to do is write the survey report. A survey report describes a survey, its results, and any patterns or trends found in the survey. Most survey reports follow a standard organization, broken up under certain headings. Each section has a specific purpose. Fill out each section correctly and proofread the paper to create a polished and professional report.

Writing the Summary and Background Info

Step 1 Break the report up into separate sections with headings.

  • Table of Contents
  • Executive Summary
  • Background and Objectives
  • Methodology
  • Conclusion and Recommendations

Step 2 Write a 1-2 page executive summary covering:

  • Methodology of the survey.
  • Key results of the survey.
  • Conclusions drawn from the results of the survey.
  • Recommendations based on the results of the survey.

Step 3 State the objectives of the survey in the background section.

  • Study or target population: Who is being studied? Do they belong to a certain age group, cultural group, religion, political belief, or other common practice?
  • Variables of the study: What is the survey trying to study? Is the study looking for the association or relationship between two things?
  • Purpose of the study: How will this information be used? What new information can this survey help us realize?

Step 4 Provide background information by explaining similar research and studies.

  • Look for surveys done by researchers in peer-reviewed academic journals. In addition to these, consult reports produced by similar companies, organizations, newspapers, or think tanks.
  • Compare their results to yours. Do your results support or conflict with their claims? What new information does your report provide on the matter?
  • Provide a description of the issue backed with peer-reviewed evidence. Define what it is you're trying to learn and explain why other studies haven't found this information.

Explaining the Method and Results

Step 1 Explain how the study was conducted in the methodology section.

  • Who did you ask? How can you define the gender, age, and other characteristics of these groups?
  • Did you do the survey over email, telephone, website, or 1-on-1 interviews?
  • Were participants randomly chosen or selected for a certain reason?
  • How large was the sample size? In other words, how many people answered the survey?
  • Were participants offered anything in exchange for filling out the survey?
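One reason sample size belongs in the methodology section: it determines the margin of error of any percentage you report. The Python sketch below uses the standard normal approximation for a proportion from a simple random sample; the numbers are illustrative only.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a reported proportion p,
    from a simple random sample of size n (normal approximation)."""
    return z * math.sqrt(p * (1 - p) / n)

# With 1,000 respondents, a 50% result carries roughly a
# +/- 3.1 percentage-point margin of error:
moe = margin_of_error(1000)
```

Note that quadrupling the sample size only halves the margin of error, which is why going from 1,000 to 4,000 respondents buys less extra precision than you might expect.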

Step 2 Describe what type of questions were asked in the methodology section.

  • For example, you might sum up the general theme of your questions by saying, "Participants were asked to answer questions about their daily routine and dietary practices."
  • Don't put all of the questions in this section. Instead, include your questionnaire in the first appendix (Appendix A).

Step 3 Report the results of the survey in a separate section.

  • If your survey interviewed people, choose a few relevant responses and type them up in this section. Refer the reader to the full questionnaire, which will be in the appendix.
  • If your survey was broken up into multiple sections, report the results of each section separately, with a subheading for each section.
  • Avoid making any claims about the results in this section. Just report the data, using statistics, sample answers, and quantitative data.
  • Include graphs, charts, and other visual representations of your data in this section.
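When reporting quantitative data without interpretation, the basic descriptive statistics (count, mean, median, standard deviation) can be computed with Python's standard library. The ratings below are invented for illustration.

```python
import statistics

# Hypothetical ratings from one survey question (1-5 scale).
ratings = [5, 4, 4, 3, 5, 2, 4, 4, 3, 5]

summary = {
    "n": len(ratings),
    "mean": statistics.mean(ratings),
    "median": statistics.median(ratings),
    "stdev": round(statistics.stdev(ratings), 2),
}
# {'n': 10, 'mean': 3.9, 'median': 4.0, 'stdev': 0.99}
```

Figures like these belong in the results section as plain facts; what they mean is saved for the conclusion.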

Step 4 Point out any interesting trends in the results section.

  • For example, do people from a similar age group respond to a certain question in a similar way?
  • Look at questions that received the highest number of similar responses. This means that most people answer the question in similar ways. What do you think that means?
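One quick way to find the questions with the highest number of similar responses is a frequency count. Here is a minimal Python sketch with made-up answers to a single question:

```python
from collections import Counter

# Hypothetical answers to "What is your main reason for exercising?"
answers = ["health", "health", "stress relief", "health",
           "appearance", "stress relief", "health"]

counts = Counter(answers)
top_answer, top_count = counts.most_common(1)[0]
# "health" was chosen 4 out of 7 times -- a trend worth pointing out.
share = top_count / len(answers)
```

A dominant answer like this is exactly the kind of pattern to flag in the results section (and interpret later in the conclusion).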

Analyzing Your Results

Step 1 State the implications of your survey at the beginning of the conclusion.

  • Here you may break away from the objective tone of the rest of the paper. You might state if readers should be alarmed, concerned, or intrigued by something.
  • For example, you might highlight how current policy is failing or state how the survey demonstrates that current practices are succeeding.

Step 2 Make recommendations about what needs to be done about this issue.

  • More research needs to be done on this topic.
  • Current guidelines or policy need to be changed.
  • The company or institution needs to take action.

Step 3 Include graphs, charts, surveys, and testimonies in the appendices.

  • Appendices are typically labeled with letters, such as Appendix A, Appendix B, Appendix C, and so on.
  • You may refer to appendices throughout your paper. For example, you can say, “Refer to Appendix A for the questionnaire” or “Participants were asked 20 questions (Appendix A)”.

Polishing Your Report

Step 1 Add a title page and table of contents to the first 2 pages.

  • The table of contents should list the page numbers for each section (or heading) of the report.

Step 2 Cite your research according to the style required for the survey report.

  • Typically, you will cite information using in-text parenthetical citations. Put the name of the author and other information, such as the page number or year of publication, in parentheses at the end of a sentence.
  • Some professional organizations may have their own separate guidelines. Consult these for more information.
  • If you don’t need a specific style, make sure that the formatting for the paper is consistent throughout. Use the same spacing, font, font size, and citations throughout the paper.

Step 3 Adopt a clear, objective voice throughout the paper.

  • Try not to editorialize the results as you report them. For example, don’t say, “The study shows an alarming trend of increasing drug use that must be stopped.” Instead, just say, “The results show an increase in drug use.”

Step 4 Write in concise, simple sentences.

  • If you have a choice between a simple word and a complex word, choose the simpler term. For example, instead of “1 out of 10 civilians testify to imbibing alcoholic drinks thrice daily,” just say “1 out of 10 people report drinking alcohol 3 times a day.”
  • Remove any unnecessary phrases or words. For example, instead of “In order to determine the frequency of the adoption of dogs,” just say “To determine the frequency of dog adoption.”

Step 5 Revise your paper thoroughly before submitting.

  • Make sure you have page numbers on the bottom of the page. Check that the table of contents contains the right page numbers.
  • Remember, spell check on word processors doesn’t always catch every mistake. Ask someone else to proofread for you to help you catch errors.


  • Always represent the data accurately in your report. Do not lie or misrepresent information.


About This Article

Anne Schmidt

To write a survey report, you’ll need to include an executive summary, your background and objectives, the methodology, results, and a conclusion with recommendations. In the executive summary, write out the main points of your report in a brief 1-2 page explanation. After the summary, state the objective of the survey, or why the survey was conducted. You should also include the hypothesis and goals of the survey. Once you’ve written this, provide some background information, such as similar studies that have been conducted, that adds to your research. Then, explain how your study was conducted in the methodology section. Make sure to include the size of your sample and what your survey contained. Finally, include the results of your study and what implications they present. To learn how to polish your report with a title page and table of contents, read on!



How to Write the Results Section: Guide to Structure and Key Points


The ‘Results’ section of a research paper, like the ‘Introduction’ and other key parts, attracts significant attention from editors, reviewers, and readers. The reason lies in its critical role: revealing the key findings of a study and demonstrating how your research fills a knowledge gap in your field. Given its importance, crafting a clear and logically structured results section is essential.

In this article, we will discuss the key elements of an effective results section and share strategies for making it concise and engaging. We hope this guide will help you quickly grasp ways of writing the results section, avoid common pitfalls, and make your writing process more efficient and effective.  

Structure of the results section  

Briefly restate the research topic in the introduction : Although the main purpose of the  results section  in a research paper is to list the notable findings of a study, it is customary to start with a brief repetition of the research question. This helps refocus the reader, allowing them to better appreciate the relevance of the findings. Additionally, restating the research question establishes a connection to the previous section of the paper, creating a smoother flow of information.  

Systematically present your research findings : Address the primary research question first, followed by the secondary research questions. If your research addresses multiple questions, mention the findings related to each one individually to ensure clarity and coherence.  

Represent your results visually: Graphs, tables, and other figures can help illustrate the findings of your paper, especially if there is a large amount of data in the results. As a rule of thumb, use a visual medium like a graph or a table if you wish to present three or more statistical values simultaneously.  

Graphical or tabular representations of data can also make your results section more visually appealing. Remember, an appealing and well-organized results section can help peer reviewers better understand the merits of your research, thereby increasing your chances of publication.  
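As a sketch of the rule of thumb above, here is how three or more statistical values can be laid out as a small plain-text table in Python; the groups and figures are invented for illustration.

```python
# Three statistical values per group: once this many appear together,
# a small table reads better than prose.
rows = [
    ("Group A", 120, 0.62),
    ("Group B",  95, 0.48),
    ("Group C", 110, 0.55),
]

header = f"{'Group':<8}{'n':>5}{'Share':>8}"
lines = [header] + [f"{g:<8}{n:>5}{s:>8.0%}" for g, n, s in rows]
table = "\n".join(lines)
print(table)
```

In a manuscript you would of course use your word processor's or typesetter's table facilities; the point is only that aligned columns let readers compare values at a glance.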

Practical guidance for writing an effective ‘Results’ section   

  • Always use simple and plain language. Avoid the use of uncertain or unclear expressions.  
  • The findings of the study must be expressed in an objective and unbiased manner. While it is acceptable to correlate certain findings, it is best to avoid over-interpreting the results. In addition, avoid using subjective or emotional words, such as “interestingly” or “unfortunately”, to describe the results, as this may cause readers to doubt the objectivity of the paper.  
  • The content should balance simplicity with comprehensiveness. For statistical data, simply describe the relevant tests and explain their results without mentioning raw data. If the study involves multiple hypotheses, describe the results for each one separately to avoid confusion and aid understanding. To enhance credibility, ensure that negative results, if any, are included in this section, even if they do not support the research hypothesis.  
  • Wherever possible, use illustrations like tables, figures, charts, or other visual representations to highlight the results of your research paper. Mention these illustrations in the text, but do not repeat the information that they convey¹.  

Difference between data, results, and discussion sections  

Data ,  results,  and  discussion  sections all communicate the findings of a study, but each serves a distinct purpose with varying levels of interpretation.   

In the results section, one cannot provide data without interpreting its relevance or make statements without citing data². In a sense, the results section does not draw connections between different data points. Therefore, there is a certain level of interpretation involved in drawing results out of data.


(The example is intended to showcase how the visual elements and text in the results section complement each other³. The academic viewpoints included in the illustrative screenshots should not be used as references.)

The discussion section allows authors even more interpretive freedom than the results section. Here, data and patterns within the data are compared with the findings from other studies to make more generalized points. Unlike the results section, which focuses purely on factual data, the discussion section touches upon hypothetical information, drawing conjectures and suggesting future directions for research.

The ‘Results’ section serves as the core of a research paper, capturing readers’ attention and providing insights into the study’s essence. Regardless of the subject of your research paper, a well-written results section can generate interest in your research. By following the tips outlined here, you can create a results section that effectively communicates your findings and invites further exploration. Remember, clarity is key, and with the right approach, your results section can guide readers through the intricacies of your research.

Professionals at Elsevier Language Services know the secret to writing a well-balanced results section. With their expert suggestions, you can ensure that your findings come across clearly to the reader. To maximize your chances of publication, reach out to Elsevier Language Services today !  


References

  • Cetin, S., & Hackam, D. J. (2005). An approach to the writing of a scientific manuscript. Journal of Surgical Research, 128(2), 165–167. https://doi.org/10.1016/j.jss.2005.07.002  
  • Bahadoran, Z., Mirmiran, P., Zadeh-Vakili, A., Hosseinpanah, F., & Ghasemi, A. (2019). The Principles of Biomedical Scientific Writing: Results. International Journal of Endocrinology and Metabolism, In Press. https://doi.org/10.5812/ijem.92113  
  • Guo, J., Wang, J., Zhang, P., Wen, P., Zhang, S., Dong, X., & Dong, J. (2024). TRIM6 promotes glioma malignant progression by enhancing FOXO3A ubiquitination and degradation. Translational Oncology, 46, 101999. https://doi.org/10.1016/j.tranon.2024.101999  


Organizing Your Social Sciences Research Paper: 7. The Results
The results section is where you report the findings of your study based upon the methodology [or methodologies] you applied to gather information. The results section should state the findings of the research arranged in a logical sequence without bias or interpretation. A section describing results should be particularly detailed if your paper includes data generated from your own research.

Annesley, Thomas M. "Show Your Cards: The Results Section and the Poker Game." Clinical Chemistry 56 (July 2010): 1066-1070.

Importance of a Good Results Section

When formulating the results section, it's important to remember that the results of a study do not prove anything . Findings can only confirm or reject the hypothesis underpinning your study. However, the act of articulating the results helps you to understand the problem from within, to break it into pieces, and to view the research problem from various perspectives.

The page length of this section is set by the amount and types of data to be reported . Be concise. Use non-textual elements appropriately, such as figures and tables, to present findings more effectively. In deciding what data to describe in your results section, you must clearly distinguish information that would normally be included in a research paper from any raw data or other content that could be included as an appendix. In general, raw data that has not been summarized should not be included in the main text of your paper unless requested to do so by your professor.

Avoid providing data that is not critical to answering the research question . The background information you described in the introduction section should provide the reader with any additional context or explanation needed to understand the results. A good strategy is to always re-read the background section of your paper after you have written up your results to ensure that the reader has enough context to understand the results [and, later, how you interpreted the results in the discussion section of your paper that follows].

Bavdekar, Sandeep B. and Sneha Chandak. "Results: Unraveling the Findings." Journal of the Association of Physicians of India 63 (September 2015): 44-46; Brett, Paul. "A Genre Analysis of the Results Section of Sociology Articles." English for Specific Purposes 13 (1994): 47-59; Burton, Neil et al. Doing Your Education Research Project. Los Angeles, CA: SAGE, 2008; Results. The Structure, Format, Content, and Style of a Journal-Style Scientific Paper. Department of Biology. Bates College; Kretchmer, Paul. Twelve Steps to Writing an Effective Results Section. San Francisco Edit; "Reporting Findings." In Making Sense of Social Research, Malcolm Williams, editor. (London: SAGE Publications, 2003), pp. 188-207.

Structure and Writing Style

I.  Organization and Approach

For most research papers in the social and behavioral sciences, there are two possible ways of organizing the results . Both approaches are appropriate in how you report your findings, but use only one approach.

  • Present a synopsis of the results followed by an explanation of key findings . This approach can be used to highlight important findings. For example, you may have noticed an unusual correlation between two variables during the analysis of your findings. It is appropriate to highlight this finding in the results section. However, speculating as to why this correlation exists and offering a hypothesis about what may be happening belongs in the discussion section of your paper.
  • Present a result and then explain it, before presenting the next result then explaining it, and so on, then end with an overall synopsis . This is the preferred approach if you have multiple results of equal significance. It is more common in longer papers because it helps the reader to better understand each finding. In this model, it is helpful to provide a brief conclusion that ties each of the findings together and provides a narrative bridge to the discussion section of your paper.

NOTE:   Just as the literature review should be arranged under conceptual categories rather than systematically describing each source, you should also organize your findings under key themes related to addressing the research problem. This can be done under either format noted above [i.e., a thorough explanation of the key results or a sequential, thematic description and explanation of each finding].

II.  Content

In general, the content of your results section should include the following:

  • Introductory context for understanding the results by restating the research problem underpinning your study . This is useful in re-orientating the reader's focus back to the research problem after having read a review of the literature and your explanation of the methods used for gathering and analyzing information.
  • Inclusion of non-textual elements, such as, figures, charts, photos, maps, tables, etc. to further illustrate key findings, if appropriate . Rather than relying entirely on descriptive text, consider how your findings can be presented visually. This is a helpful way of condensing a lot of data into one place that can then be referred to in the text. Consider referring to appendices if there is a lot of non-textual elements.
  • A systematic description of your results, highlighting for the reader observations that are most relevant to the topic under investigation . Not all results that emerge from the methodology used to gather information may be related to answering the " So What? " question. Do not confuse observations with interpretations; observations in this context refer to highlighting important findings you discovered through a process of reviewing prior literature and gathering data.
  • The page length of your results section is guided by the amount and types of data to be reported . However, focus on findings that are important and related to addressing the research problem. It is not uncommon to have unanticipated results that are not relevant to answering the research question. This is not to say that you should ignore tangential findings; in fact, they can be noted as areas for further research in the conclusion of your paper. However, spending time in the results section describing tangential findings clutters your overall results section and distracts the reader.
  • A short paragraph that concludes the results section by synthesizing the key findings of the study . Highlight the most important findings you want readers to remember as they transition into the discussion section. This is particularly important if, for example, there are many results to report, the findings are complicated or unanticipated, or they are impactful or actionable in some way [i.e., able to be feasibly applied to practice].

NOTE:   Always use the past tense when referring to your study's findings. Reference to findings should always be described as having already happened because the method used to gather the information has been completed.

III.  Problems to Avoid

When writing the results section, avoid doing the following :

  • Discussing or interpreting your results . Save this for the discussion section of your paper, although where appropriate, you should compare or contrast specific results to those found in other studies [e.g., "Similar to the work of Smith [1990], one of the findings of this study is the strong correlation between motivation and academic achievement...."].
  • Reporting background information or attempting to explain your findings. This should have been done in your introduction section, but don't panic! Often the results of a study point to the need for additional background information or to explain the topic further, so don't think you did something wrong. Writing up research is rarely a linear process. Always revise your introduction as needed.
  • Ignoring negative results . A negative result generally refers to a finding that does not support the underlying assumptions of your study. Do not ignore them. Document these findings and then state in your discussion section why you believe a negative result emerged from your study. Note that negative results, and how you handle them, can give you an opportunity to write a more engaging discussion section, therefore, don't be hesitant to highlight them.
  • Including raw data or intermediate calculations . Ask your professor if you need to include any raw data generated by your study, such as transcripts from interviews or data files. If raw data is to be included, place it in an appendix or set of appendices that are referred to in the text.
  • Using vague or subjective language . Be as factual and concise as possible in reporting your findings. Do not use phrases that are vague or non-specific, such as "appeared to be greater than other variables..." or "demonstrates promising trends that...." Subjective modifiers should be explained in the discussion section of the paper [i.e., why did one variable appear greater? Or, how does the finding demonstrate a promising trend?].
  • Presenting the same data or repeating the same information more than once . If you want to highlight a particular finding, it is appropriate to do so in the results section. However, you should emphasize its significance in relation to addressing the research problem in the discussion section. Do not repeat it in your results section because you can do that in the conclusion of your paper.
  • Confusing figures with tables . Be sure to properly label any non-textual elements in your paper. Don't call a chart an illustration or a figure a table. If you are not sure of the difference, consult a style guide.


Writing Tip

Why Don't I Just Combine the Results Section with the Discussion Section?

It's not unusual to find articles in scholarly social science journals where the author(s) have combined a description of the findings with a discussion about their significance and implications. You could do this. However, if you are inexperienced at writing research papers, consider creating two distinct sections as a way to better organize your thoughts and, by extension, your paper. Think of the results section as the place where you report what your study found; think of the discussion section as the place where you interpret the information and answer the "So What?" question. As you become more skilled at writing research papers, you can consider melding the results of your study with a discussion of its implications.


  • Last Updated: Aug 30, 2024 10:02 AM
  • URL: https://libguides.usc.edu/writingguide

J Korean Med Sci. 2020 Nov 23; 35(45)


Reporting Survey Based Studies – a Primer for Authors

Prithvi Sanjeevkumar Gaur

1 Smt. Kashibai Navale Medical College and General Hospital, Pune, India.

Olena Zimba

2 Department of Internal Medicine No. 2, Danylo Halytsky Lviv National Medical University, Lviv, Ukraine.

Vikas Agarwal

3 Department of Clinical Immunology and Rheumatology, Sanjay Gandhi Postgraduate Institute of Medical Sciences, Lucknow, India.

Latika Gupta


The coronavirus disease 2019 (COVID-19) pandemic has led to a massive rise in survey-based research. The paucity of perspicuous guidelines for conducting surveys may pose a challenge to the conduct of ethical, valid and meticulous research. The aim of this paper is to guide authors aiming to publish in scholarly journals regarding the methods and means to carry out surveys for valid outcomes. The paper outlines the various aspects of surveys, from planning, execution and dissemination to data analysis and the choice of target journal. While providing a comprehensive understanding of the scenarios most conducive to carrying out a survey, and of the role of ethical approval, survey validation and pilot testing, this brief delves deeper into survey designs, methods of dissemination, ways to secure and maintain data anonymity, the various analytical approaches, reporting techniques and the process of choosing the appropriate journal. Further, the authors analyze retracted survey-based studies and the reasons for their retraction. This review article intends to improve the quality of survey-based research by describing the essential tools and means to do so, with the hope of improving the utility of such studies.

Graphical Abstract


INTRODUCTION

Surveys are the principal method used to address topics that require individual self-report about beliefs, knowledge, attitudes, opinions or satisfaction, which cannot be assessed using other approaches. 1 This research method allows information to be collected by asking a set of questions on a specific topic to a subset of people and generalizing the results to a larger population. Assessing opinions in a valid and reliable way requires clear, structured and precise reporting of results. This is possible with a meticulously designed survey, followed by validation and pilot testing. 2 The aim of this opinion piece is to provide practical advice for conducting survey-based research. It details the ethical and methodological aspects to be considered while performing a survey, the online platforms available for distributing surveys, and the implications of survey-based research.

Survey-based research is a means to obtain quick data, and such studies are relatively easy to conduct and analyse, and are cost-effective (under a majority of the circumstances). 3 These are also one of the most convenient methods of obtaining data about rare diseases. 4 With major technological advancements and improved global interconnectivity, especially during the coronavirus disease 2019 (COVID-19) pandemic, surveys have surpassed other means of research due to their distinctive advantage of a wider reach, including respondents from various parts of the world having diverse cultures and geographically disparate locations. Moreover, survey-based research allows flexibility to the investigator and respondent alike. 5 While the investigator(s) may tailor the survey dates and duration as per their availability, the respondents are allowed the convenience of responding to the survey at ease, in the comfort of their homes, and at a time when they can answer the questions with greater focus and to the best of their abilities. 6 Respondent biases inherent to environmental stressors can be significantly reduced by this approach. 5 It also allows responses across time-zones, which may be a major impediment to other forms of research or data-collection. This allows distant placement of the investigator from the respondents.

Various digital tools are now available for designing surveys ( Table 1 ). 7 Most of these are free, with separate premium paid options. The analysis of data can be made simpler, and the cleaning process almost obsolete, by minimising open-ended answer choices. 8 Close-ended answers make data collection and analysis efficient by generating a spreadsheet that can be directly accessed and analysed. 9 Minimizing the number of questions and making all questions mandatory can further aid this process by bringing uniformity to the responses and making analysis simpler. Surveys are arguably also the most engaging form of research, conditional on the skill of the investigator.
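As an illustration of why close-ended answers streamline analysis, the sketch below tallies a hypothetical column of exported responses using only Python's standard library; the question and answer labels are invented for the example.

```python
from collections import Counter

# Hypothetical close-ended responses exported from a survey tool.
responses = ["Agree", "Agree", "Neutral", "Disagree", "Agree", "Neutral"]

counts = Counter(responses)
total = len(responses)

# Each answer choice maps directly to a count and a percentage;
# no free-text cleaning or coding step is needed.
for choice, n in counts.most_common():
    print(f"{choice}: {n} ({100 * n / total:.1f}%)")
```

The same tally can feed directly into a chart or a results table, which is exactly the uniformity that mandatory close-ended questions buy.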

Table 1. Digital survey tools, with free and paid features

1. SoGoSurvey
   Free: Pre-defined templates, multilingual surveys, skip logic, question and answer bank, progress bar, add comments, import answers, embed multimedia, print surveys.
   Paid: Advanced reporting and analysis, pre-fill known data into visible and hidden fields, automatic scoring, display custom messages based on quiz scores.
2. Typeform
   Free: 3 Typeforms, 10 Q/t, 100 A/m, templates, reports and metrics, embed Typeform in a webpage, download data.
   Paid: 10,000 A/m, unlimited logic jumps, remove Typeform branding, payment fields, scoring and pricing calculator, send follow-up emails.
3. Zoho Survey
   Free: Unlimited surveys, 10 Q/s, 100 A/s, in-mail surveys, templates, embed in website, scoring, HTTPS encryption, social media promotion, password protection, 1 response collector, survey builder in 26 languages.
   Paid: Unlimited questions, respondents and response collectors, question randomization, Zoho CRM, Eventbrite, Slack, Google Sheets, Shopify and Zendesk integration, sentiment analysis, piping logic, white-label survey, upload favicon, Tableau integration.
4. YesInsights
   Free: NA.
   Paid: 25,000 A/m, NPS surveys, website widget, unlimited surveys and responses.
5. Survey Planet
   Free: Unlimited surveys, questions and responses, two survey player types, share surveys on social media and email, SSL security, no data mining or information selling, embed data, pre-written surveys, basic themes, surveys in 20 languages, basic in-app reports.
   Paid: Export results, custom themes, question branching and images with custom formatting, alternative success URL redirect, white-label and kiosk surveys, e-mail survey completion notifications, four chart types for results.
6. Survey Gizmo
   Free: 3 surveys, unlimited Q/s, 100 A, raw data exports, share reports via URL, various question and answer options, progress bar and share on social media options.
   Paid: Advanced reports (profile, longitudinal), logic and piping, A/B split testing, disqualifications, file uploads, API access, webpage redirects, conjoint analysis, crosstab reports, TURF reports, open-text analysis, data-cleaning tool.
7. SurveyMonkey
   Free: 10 questions, 100 respondents, 15 question types, light theme customization and templates.
   Paid: Unlimited, multilingual questions and surveys, fine control systems, analyse, filter and export results, shared asset library, customised logos, colours and URLs.
8. SurveyLegend
   Free: 3 surveys, 6 pictures, unlimited responses, real-time analytics, no data export, 1 conditional logic, ads and watermarked, top-notch security and encryption, collect on any device.
   Paid: Unlimited surveys, responses and pictures, unlimited conditional logic, white label, share real-time results, enable data export, 100K API calls and 10 GB storage.
9. Google Forms
   Free: Unlimited surveys and respondents, data collection in Google spreadsheets, themes, custom logo, add images or videos, skip logic and page branching, embed survey into emails or website, add collaborators.
   Paid: NA.
10. Client Heartbeat
   Free: NA.
   Paid: Unlimited surveys, 50+ users, 10,000+ contacts, 10 sub-accounts, CRM syncing/API access, company branding, concierge support.

Q/t = questions per typeform, A/m = answers per month, Q/s = questions per survey, A/s = answers per survey, NA = not applicable, NPS = net promoter score.

Data protection laws now mandate anonymity while collecting data for most surveys, particularly when they are exempt from ethical review. 10 , 11 Anonymization has the potential to reduce (or at times even eliminate) social desirability bias, which gains particular relevance when targeting responses from socially isolated or vulnerable communities (e.g. LGBTQ and low socio-economic strata communities) or minority groups (religious, ethnic and medical), or when covering controversial topics (drug abuse, using language-editing software).

Moreover, surveys could be the primary methodology to explore a hypothesis until it evolves into a more sophisticated and partly validated idea after which it can be probed further in a systematic and structured manner using other research methods.

The aim of this paper is to reduce the incorrect reporting of surveys. The paper also intends to inform researchers of the various aspects of survey-based studies and the multiple points that need to be taken under consideration while conducting survey-based research.

SURVEYS IN THE COVID-19 PANDEMIC

The COVID-19 pandemic has led to a distinctive rise in survey-based research. 12 The need to socially distance amid widespread lockdowns reduced patient visits to the hospital and brought most other forms of research to a standstill in the early pandemic period. A large number of level-3 biosafety laboratories are engaged in research pertaining to COVID-19, thereby limiting the options for conducting laboratory-based research. 13 , 14 Therefore, surveys appear to be the most viable option for researchers to explore hypotheses related to the situation and its impact in such times. 15

LIMITATIONS WHILE CONDUCTING SURVEY-BASED RESEARCH

Designing a good survey is an arduous task and requires skill, even though clear guidelines are available. Survey design requires extensive thoughtfulness about the core questions (based on the hypothesis or the primary research question), consideration of all possible answers, and the inclusion of open-ended options to allow other possibilities to be recorded. A survey should be robust in regard to the questions asked and the answer choices available, and it must be validated and pilot tested. 16 The survey design may be supplemented with answer choices tailored for the convenience of the responder, to reduce effort while making the survey more engaging. Survey dissemination and the engagement of respondents also require experience and skill. 17

Furthermore, the absence of an interviewer prevents clarification of responses to open-ended questions, if any. Internet surveys are also prone to survey fraud through erroneous reporting. Hence, the anonymity of surveys is both a boon and a bane. Sample sizes are skewed because populations absent from the Internet, such as the elderly or the underprivileged, are not represented. The illiterate population also lacks representation in survey-based research.

The “Enhancing the QUAlity and Transparency Of health Research” network (EQUATOR) provides two separate guidelines replete with checklists to ensure valid reporting of e-survey methodology. These include “The Checklist for Reporting Results of Internet E-Surveys” (CHERRIES) statement and “ The Journal of Medical Internet Research ” (JMIR) checklist.

COMMON TYPES OF SURVEY-BASED RESEARCH

From a clinician's standpoint, common survey types include those centered around problems faced by patients or physicians. 18 Surveys collecting the opinions of various clinicians on a debated clinical topic, and feedback forms typically served after attending a medical conference, prescribing a new drug or trying a new method for a given procedure, are also surveys. The formulation of clinical practice guidelines entails Delphi exercises using paper surveys, which are yet another form of survey-mediated research.

The size of a survey depends on its intent; surveys may be large or small. Identification of the intent behind the survey is therefore essential to allow the investigator to form a hypothesis and then explore it further. Large population-based or provider-based surveys are often conducted and generate mammoth data over the years, e.g. the National Health and Nutrition Examination Survey, the National Health Interview Survey and the National Ambulatory Medical Care Survey.

SCENARIOS FOR CONDUCTING SURVEY-BASED RESEARCH

Despite all that has been said about the convenience of conducting survey-based research, it is prudent to conduct a feasibility check before embarking on one. Certain scenarios may determine the fate of survey-based research ( Table 2 ).

Table 2. Suitable and unsuitable scenarios for survey-based research

Suitable scenarios

Respondent related:
1. Avid Internet users are the ideal target demographic.
2. An email database makes reminders convenient.
3. An enthusiastic target demographic removes the need for incentives.
4. A larger sample size is supported.
5. Respondents and non-respondents can be matched.

Investigator related:
1. An adequate budget is available for survey dissemination.
2. The investigator is well-versed with all software required for the survey.
3. IP addresses and cookies can be monitored to avoid multiple responses.
4. The survey undergoes pilot testing, validation testing and reliability testing.
5. Data entry is allowed without data editing.

Survey related:
1. The survey is engaging and interactive, using the various tools available.
2. Content evolves quickly in repeated succession to keep the respondent alert (e.g. Delphi surveys).
3. The survey is suitable for recording rare or strange events that later help to develop a hypothesis.

Unsuitable scenarios

Respondent related:
1. Populations under-represented on the Internet cannot be included.
2. Populations with privacy concerns, such as transgender people, sex workers or rape survivors, need to be promised anonymity.
3. People lacking motivation and enthusiasm require coaxing and convincing by the physician, or incentives as a last resort.
4. An illiterate population is unable to read and comprehend the questions asked.

Investigator related:
1. The investigator is a novice at or inexperienced with web-based tools.

Survey related:
1. Accurate and precise data, or observational data, are needed.
2. An existing study has already validated the key observations (e.g. a door-to-door study has already been conducted).
3. Qualitative data are being studied.
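The investigator-related point about monitoring IP addresses and cookies to avoid multiple responses can be sketched as a simple duplicate check. Hashing the address is an assumption of this example rather than a step mandated by the text, but it keeps the raw IP out of the analysis data set, which also suits the anonymity requirements discussed earlier.

```python
import hashlib

def is_duplicate(ip_address: str, seen_hashes: set) -> bool:
    """Flag a submission whose (hashed) IP has already responded."""
    digest = hashlib.sha256(ip_address.encode()).hexdigest()
    if digest in seen_hashes:
        return True
    seen_hashes.add(digest)
    return False

seen = set()
print(is_duplicate("203.0.113.5", seen))  # first submission
print(is_duplicate("203.0.113.5", seen))  # repeat submission from the same address
```

In practice a shared office IP can legitimately submit more than once, so a flag like this is a prompt for review rather than automatic rejection.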

ETHICS APPROVAL FOR SURVEY-BASED RESEARCH

Approval from the Institutional Review Board should be taken as required according to the CHERRIES checklist. However, rules for approval differ by country, and local rules must therefore be checked and followed. For instance, in India, the Indian Council of Medical Research released an article in 2017 stating that the concept of broad consent has been updated; this is defined as "consent for an unspecified range of future research subject to a few contents and/or process restrictions." It speaks of "the flexibility of Indian ethics committees to review a multicentric study proposal for research involving low or minimal risk, survey or studies using anonymized samples or data or low or minimal risk public health research." The reporting of approvals received and applied for, and the procedure of written, informed consent followed, must be clear and transparent. 10 , 19

The use of incentives in surveys is also an ethical concern. 20 Incentives may be monetary or non-monetary. Monetary incentives are usually discouraged, as they may attract the wrong population tempted by the monetary benefit. Monetary incentives have nonetheless been seen to give surveys greater traction, even though this is yet to be proven. They are provided not only as cash or cheques but also in the form of free articles, discount coupons, phone cards, e-money or cashback value. 21 These methods, though tempting, should be used sparingly; if used, their use must be disclosed and justified in the report. Non-monetary incentives, such as a meeting with a famous personality or access to restricted and authorized areas, can also help pique the interest of the respondents.

DESIGNING A SURVEY

As mentioned earlier, the design of a survey is reflective of the skill of the investigator curating it. 22 Survey builders can be used to design an efficient survey. These offer the majority of the basic features needed to construct a survey, free of charge. Surveys can therefore be designed from scratch, using pre-designed templates, or by using previous survey designs as inspiration. Taking surveys can be made convenient by using the various aids available ( Table 1 ). Moreover, the investigator should be mindful of the unintended response effects of the ordering and context of survey questions. 23

Surveys using clear, unambiguous, simple and well-articulated language record precise answers. 24 A well-designed survey accounts for the culture, language and convenience of the target demographic. The age, region, country and occupation of the target population are also considered before constructing a survey. Consistency is maintained in the terms used in the survey, and abbreviations are avoided so that respondents have a clear understanding of each question. Universal or previously indexed abbreviations maintain the unambiguity of the survey.

Surveys beginning with broad, easy and non-specific questions, rather than sensitive or tedious ones, receive more accurate and complete answers. 25 Questionnaires designed so that the relatively tedious and long questions requiring the respondent to do some nit-picking are placed at the end have improved response rates. This prevents the respondent from being discouraged at the very beginning and motivates the respondent to finish the survey. All questions should provide a non-response option, and all questions should be made mandatory to increase the completeness of the survey. Questions can be framed in a close-ended or open-ended fashion. Close-ended questions are easier to analyze and less tedious to answer, and therefore should be the main component of a survey. Open-ended questions have minimal use, as they are tedious, take time to answer and require fine articulation of one's thoughts. Their minimal use is also advocated because interpreting such answers demands considerable time and energy, given the diverse nature of the responses, which is difficult to promise with large sample sizes. 26 However, whenever the closed choices do not cover all probabilities, an open answer choice must be added. 27 , 28

Screening questions can be used to restrict access to the survey in cases where inclusion criteria need to be established to maintain the authenticity of the target demographic. Similarly, a logic function can be used to apply exclusion criteria. This keeps the record of responses clean and clear and makes the investigator's job easier. Depending on the investigator's preference, respondents may or may not be given the option to return to a previous page or question to alter their answers.
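A minimal sketch of the screening-plus-logic idea follows; the eligibility fields and thresholds are entirely hypothetical, standing in for whatever inclusion and exclusion criteria a real survey would apply.

```python
def eligible(respondent: dict) -> bool:
    """Hypothetical screening logic for survey access.

    Mirrors a screening question plus a logic-function exclusion:
    only practising physicians aged 25 or over proceed to the survey.
    """
    return respondent.get("is_physician", False) and respondent.get("age", 0) >= 25

print(eligible({"is_physician": True, "age": 40}))   # passes screening
print(eligible({"is_physician": False, "age": 40}))  # screened out by logic
```

Encoding the criteria as a single predicate like this keeps the response record clean: ineligible respondents never generate partial data that has to be filtered out later.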

For questions directed towards feelings or opinions, the range of responses received can be narrowed by using slider scales or a Likert scale. 29 , 30 For questions having multiple answers, check boxes are efficient. When a large number of answers are possible, dropdown menus reduce the arduousness. 31 Matrix scales can be used for questions requiring grading or having a similar range of answers across multiple conditions. Maximum respondent participation and complete survey responses can be ensured by reducing the survey time. Quiz or weighted modes allow the respondent to shuffle between questions, allow scoring of quizzes, and can complement other weighted scoring systems. 32 A flowchart depicting a survey construct is presented in Fig. 1 .

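Likert-scale answers map naturally onto numeric scores, which is part of why they bound the range of responses. A minimal sketch, with a hypothetical five-point mapping and invented answers:

```python
from statistics import mean

# Hypothetical 5-point Likert mapping for one opinion question.
LIKERT = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
          "Agree": 4, "Strongly agree": 5}

answers = ["Agree", "Strongly agree", "Neutral", "Agree"]
scores = [LIKERT[a] for a in answers]

# Because the scale is fixed, summary statistics are immediately
# meaningful and comparable across questions on the same scale.
print(f"mean score: {mean(scores):.2f}")
```

The same mapping idea underlies weighted scoring modes: each answer choice carries a predefined weight that can be summed per respondent.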

Survey validation

Validation testing, though tedious and meticulous, is a worthy effort, as the accuracy of a survey is determined by its validity. It is indicative of the appropriateness of the survey sample and the specificity of the questions, ensuring that the data acquired are streamlined to answer the questions being posed or to test a hypothesis. 33 , 34 Face validation assesses whether the questions are constructed such that the necessary data are collected. Content validation assesses how well the questions relate to the topic being addressed and its related areas. Internal validation makes sure that the questions being posed are directed towards the outcome of the survey. Finally, test–retest validation determines the stability of questions over time by administering the questionnaire twice with a time interval between the two tests. For surveys assessing respondents' knowledge of a certain subject, it is advisable to have a panel of experts undertake the validation process. 2 , 35
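Test–retest stability is commonly quantified as the correlation between the two administrations. The sketch below implements the standard Pearson formula by hand; the respondent scores are invented for illustration.

```python
from statistics import mean, stdev

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical scores from the same six respondents at two time points.
test1 = [4, 3, 5, 2, 4, 3]
test2 = [4, 3, 4, 2, 5, 3]

# A correlation near 1 suggests the questionnaire is stable over time.
print(f"test-retest r = {pearson(test1, test2):.2f}")
```

What counts as an acceptable correlation depends on the field and the construct being measured, so the threshold should be justified in the methods section.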

Reliability testing

If the questions in the survey are posed in a manner that elicits the same or similar responses from respondents irrespective of the language or construction of the question, the survey is said to be reliable. Reliability is thereby a marker of the consistency of the survey. This is of considerable importance in knowledge-based research, where recall ability is tested by making the survey available to the same participants at regular intervals. Varying the construction of the questions can also be used to maintain the authenticity of the survey.
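One common quantitative check of this kind of consistency, though not named in the text, is Cronbach's alpha across reworded items that are meant to measure the same thing. A sketch with invented scores:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a set of item-score columns.

    `items` is a list of lists: one inner list of respondent scores
    per question intended to measure the same underlying construct.
    """
    k = len(items)
    item_vars = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

# Hypothetical: three rewordings of one question, five respondents.
q1 = [4, 3, 5, 2, 4]
q2 = [4, 3, 4, 2, 5]
q3 = [5, 3, 4, 2, 4]

print(f"alpha = {cronbach_alpha([q1, q2, q3]):.2f}")
```

Higher alpha indicates that the differently worded questions behave consistently; conventions for an acceptable value vary by discipline.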

Designing a cover letter

A cover letter is the primary means of communication with the respondent, with the intent to introduce the respondent to the survey. A cover letter should include the purpose of the survey and details of those conducting it, including contact details in case clarification is desired. It should also clearly depict the action required of the respondent. Data anonymization may be crucial to many respondents and is their right. This should be respected with a clear description of the data handling process while disseminating the survey. A good cover letter is the key to building trust with the respondent population and can be the forerunner to better response rates. Imparting a sense of purpose is vital to ideationally incentivize the respondent population. 36 , 37 Adding the credentials of the team conducting the survey may further aid the process. Advance intimation of the survey prepares the respondents while improving their compliance.

The design of a cover letter needs much attention. It should be captivating, clear and precise, and use vocabulary and language specific to the target population of the survey. Active voice should be used for greater impact. Crowding of details must be avoided. Italics, bold fonts or underlining may be used to highlight critical information. The tone ought to be polite, respectful, and grateful in advance. The use of capital letters is best avoided, as it is a surrogate for shouting in verbal speech and may leave a bad taste.

The dates of the survey may be intimated, so that respondents may prepare to take it at a time conducive to them. While emailing a closed group in a convenience-sampled survey, using the name of the addressee may impart a customized experience and enhance trust-building and possibly compliance. Appropriate use of salutations like Mr./Ms./Mrs. may be considered. Various portals such as SurveyMonkey allow researchers to save an address list on the website. These contacts may then be reached using an embedded survey link from a verified email address to minimize the bouncing back of emails.

The body of the cover letter must be short and crisp and, under ideal circumstances, not exceed 2–3 paragraphs. Earnest efforts to protect confidentiality may go a long way in enhancing response rates. 38 While it is enticing to provide incentives to enhance responses, these are best avoided. 38 , 39 In cases when indirect incentives are offered, such as provision of the results of the survey, these should be clearly stated in the cover letter. Lastly, a formal closing note with the signature of the lead investigator is welcome. 38 , 40

Designing questions

Well-constructed questionnaires are essentially the backbone of successful survey-based studies. With this type of research, the primary concern is the adequate promotion and dissemination of the questionnaire to the target population. The selection of the sample population therefore needs to be done with minimal flaws. The method of conducting the survey is an essential determinant of the response rate observed. 41 Broadly, surveys are of two types: closed and open. The method of conducting the survey must be determined based on the sample population.

Many doctors use their own patients as the target demographic, as this improves compliance. However, this is effective only in surveys aimed at a geographically specific, fairly common disease, as the sample size needs to be adequate. Response bias can be identified by comparing data collected from respondent and non-respondent groups. 42 , 43 It is therefore more efficacious to choose a target population whose baseline characteristics are already known. For surveys focused on patients with a rare group of diseases, online surveys or e-surveys can be conducted. Data can also be gathered from the multiple national organizations and societies all over the world. 44 , 45 Computer-generated random selection can be used to choose participants from these data, and the participants can be reached using emails or social media platforms like WhatsApp and LinkedIn. In both these scenarios, closed questionnaires can be used. These have restricted access, either through a URL link or through e-mail.
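The computer-generated random selection described above can be sketched as follows; the registry contents and the sample size are hypothetical.

```python
import random

# Hypothetical participant registry compiled from society databases.
registry = [f"participant_{i:03d}" for i in range(1, 201)]

# A fixed seed makes the draw reproducible and auditable,
# which is useful when documenting the sampling method.
rng = random.Random(42)
sample = rng.sample(registry, k=25)  # 25 participants, without replacement

print(len(sample))
```

Sampling without replacement guarantees that no participant is invited twice, and the seeded draw can be rerun exactly when the selection procedure needs to be verified.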

In surveys targeting an issue faced by a larger demographic (e.g., pandemics such as COVID-19, flu vaccination and socio-political scenarios), open surveys are the more viable option, as they can be easily accessed by the majority of the public and ensure a large number of responses, thereby increasing the accuracy of the study. Survey length should be kept optimal to avoid poor response rates. 25 , 46

SURVEY DISSEMINATION

Uniform distribution of the survey ensures that the entire target population has an equitable opportunity to access the questionnaire and participate in it. While deciding on the target demographic, communities should be studied, and the process of “lurking” (passively observing a community before engaging with it) is sometimes practiced. Multiple sampling methods are available ( Fig. 1 ). 47

Distribution of the survey to the target demographic can be done using emails. Even though e-mails reach a large proportion of the target population, messages from an unknown sender may be blocked, making a personal or previously used email address preferable for correspondence. Adding a cover letter along with the invite adds a personal touch and is hence advisable. Some platforms allow the sender to link the survey portal with the sender's email after verifying it. Notably, despite repeated email reminders, personal communication over the phone or instant messaging improved responses in the authors' experience. 48 , 49

Distribution of the survey over other social media platforms (SMPs, namely WhatsApp, Facebook, Instagram, Twitter, LinkedIn etc.) is also practiced. 50 , 51 , 52 Distributing the survey on every available platform ensures maximal outreach. 53 Other smartphone apps can also be used for wider survey dissemination. 50 , 54 It is important to be mindful of the target population while choosing the platform for dissemination, as some SMPs such as WhatsApp are more popular in India, WeChat is used more widely in China, and Facebook is popular among the European population. Professional accounts or popular social accounts can be used to promote a survey and increase its outreach. 55 Incentives such as internet giveaways or meet-and-greets with a favorite social media influencer have been used to motivate people to participate.

However, social media platforms do not allow calculation of the denominator of the target population, making it impossible to derive an accurate response rate. Moreover, this method of collecting data may introduce a respondent bias inherent to a community with a greater online presence. 43 The inability to gather the demographics of the non-respondents (in a bid to show that they were no different from respondents) can be another challenge in convenience sampling, unlike in cohort-based studies.

Lastly, manual filling of surveys over the telephone, by narrating the questions and answer choices to the respondents, is used as a last resort to achieve the desired response rate. 56 Studies reveal that surveys released on Mondays, Fridays, and Sundays receive more traction. Reminders sent at regular intervals also help receive more responses. Data collection can be improved in collaborative research by syncing surveys with electronic case record forms. 57 , 58 , 59

DATA ANONYMITY

Data anonymity refers to the protection of data received as part of the survey. These data must be stored and handled in accordance with the patient privacy rights and privacy protection laws applicable to surveys. Ethically, the data should be received in a single source file handled by one individual. Sharing or publishing these data on any public platform is considered a breach of the patient's privacy. 11 In convenience-sampled surveys conducted by e-mailing a predesignated group, the email addresses must remain confidential, as inadvertently sharing them as supplementary data in the manuscript may amount to a violation of ethical standards. 60 A completely anonymized e-survey avoids collection of Internet protocol addresses in addition to other patient details such as names and emails.

Data anonymity gives respondents the confidence to be candid and answer the survey without inhibitions. This is especially apparent in minority groups or communities facing societal bias (sex workers, transgender people, lower-caste communities, women). Data anonymity reassures respondents about their privacy. As the respondents play the primary role in data collection, data anonymity plays a vital role in survey-based research.

DATA HANDLING OF SURVEYS

The data collected from the survey responses are compiled in .csv, .xls or .xlsx format by the survey tool itself. The data can be viewed during the survey period or after its completion. To ensure data anonymity, a minimal number of people should have access to these results. The data should then be sifted to remove false, incorrect or incomplete entries. The relevant, complete data should then be analyzed qualitatively and quantitatively, as per the aim of the study. Statistical aids such as pie charts, graphs and data tables can be used to report the data.
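As a sketch, the sifting step described above might look like this in Python with pandas; the column names, the duplicate row and the validity rule (a 1–5 Likert rating) are hypothetical examples, not part of any actual study.

```python
import pandas as pd

# A small stand-in for the file exported by the survey tool
# (column names and values are hypothetical examples).
raw = pd.DataFrame({
    "q1_age_group":     ["25-34", "25-34", "35-44", None,   "45-54"],
    "q2_specialty":     ["rheum", "rheum", "ortho", "derm", "rheum"],
    "q3_likert_rating": [4,       4,       9,       3,      5],
})

# 1. Drop exact duplicate submissions.
df = raw.drop_duplicates()

# 2. Remove incomplete responses (any required answer missing).
df = df.dropna(subset=["q1_age_group", "q3_likert_rating"])

# 3. Remove clearly invalid entries, e.g. a Likert rating outside 1-5.
df = df[df["q3_likert_rating"].between(1, 5)]

print(len(df))  # number of valid responses retained
```

In practice the same three steps would run against the tool's exported .csv file rather than an inline frame.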

ANALYSIS OF SURVEY DATA

Analysis of the recorded responses is done after the survey window has closed. This ensures that statistical conclusions and hypotheses are established after careful study of the entire database. Depending on the study, analysis may be restricted to complete responses or may also draw on incomplete ones. Survey-based studies require careful consideration of aspects such as the time required to complete the survey. 61 Cut-off points in the completion time allow authentic answers to be distinguished from disingenuously completed questionnaires. Methods of handling incomplete questionnaires and atypical timestamps must be pre-decided to maintain consistency. Since surveys were often the only way to reach people during the COVID-19 pandemic, disingenuous survey practices must be avoided, as the results may later be used to form a preliminary hypothesis.
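A pre-decided completion-time cut-off can be applied programmatically. The sketch below assumes a hypothetical 60-second threshold and made-up response times; a real threshold should come from pilot-testing of the actual questionnaire.

```python
import pandas as pd

# Hypothetical response log: completion time in seconds per respondent.
responses = pd.DataFrame({
    "respondent_id":   [1, 2, 3, 4, 5],
    "seconds_to_fill": [12, 240, 300, 7, 410],
})

# Pre-decided cut-off: anything faster than 60 seconds is treated as a
# disingenuous "click-through" and excluded from analysis.
MIN_SECONDS = 60
authentic = responses[responses["seconds_to_fill"] >= MIN_SECONDS]

print(sorted(authentic["respondent_id"]))
```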

REPORTING SURVEY-BASED RESEARCH

Reporting survey-based research is by far the most challenging part of this method. A well-reported survey-based study is a comprehensive report covering all aspects of conducting the research.

The design of the survey, mentioning the target demographic, sample size, language, type, methodology and the inclusion-exclusion criteria followed, comprises the descriptive report of a survey-based study. Details regarding the conduct of pilot-testing, validation testing, reliability testing and user-interface testing add value to the report and support the data and analysis. Measures taken to prevent bias and ensure consistency and precision are key inclusions in a report. The report usually mentions approvals received, if any, along with the written informed consent taken from the participants to use the data for research purposes. It also gives a detailed account of the different distribution and promotional methods followed.

A detailed account of the data input and collection methods, along with the tools used to maintain the anonymity of the participants and the steps taken to ensure singular participation from individual respondents, indicates a well-structured report. Descriptive information on the website used, the visitors received and the factors externally influencing the survey is included. Detailed reporting of the post-survey analysis, including the number of analysts involved, any data cleaning required, the statistical analysis done and the probable hypothesis concluded, is a key feature of well-reported survey-based research. Methods used for statistical corrections, if any, should be included in the report. The EQUATOR network has two checklists, the “Checklist for Reporting Results of Internet E-Surveys” (CHERRIES) statement and “ The Journal of Medical Internet Research ” (JMIR) checklist, that can be utilized to construct a well-framed report. 62 , 63 Importantly, self-reporting of biases and errors avoids the carrying forward of false hypotheses as a basis for more advanced research. References should be cited using standard recommendations, guided by the journal specifications. 64

CHOOSING A TARGET JOURNAL FOR SURVEY-BASED RESEARCH

Surveys can be published as original articles, brief reports or letters to the editor. Interestingly, most modern journals do not explicitly mention surveys in their instructions to authors. Thus, depending on the study design, the authors may choose the article category: cohort, case-control, interview- or survey-based study. It is prudent to mention the type of study in the title. Titles should not be too long (ideally 10–12 words) and may feature the type of study design after a semicolon, for clarity and greater citation potential.

While the choice of journal is largely based on the study subject and left to the authors' discretion, it may be worthwhile exploring trends in a journal's archive before proceeding with submission. 65 Although the article format is similar across most journals, specific rules relevant to the target journal should be followed for drafting the article structure before submission.

RETRACTION OF ARTICLES

Articles that are removed from publication after release are retracted articles. Articles are usually retracted when discrepancies come to light regarding the methodology followed, plagiarism, incorrect statistical analysis, inappropriate authorship, fake peer review, fabricated reporting and the like. 66 A considerable increase in such papers has been noticed. 67

We carried out a search for “surveys” on Retraction Watch on 31st August 2020 and received 81 search results published between November 2006 and June 2020, of which 3 were repeated. Of the 78 unique results, 37 (47.4%) articles were surveys, 23 (29.5%) were of unknown type and 18 (23.1%) reported other types of research ( Supplementary Table 1 ). Fig. 2 gives a detailed description of the causes of retraction of the surveys we found and their geographic distribution.


CONCLUSION

A good survey ought to be designed with a clear objective, the design being precise and focused, with close-ended questions and all probabilities included. Use of rating scales, multiple-choice questions and checkboxes, and maintaining a logical question sequence, engages the respondent while simplifying data entry and analysis for the investigator. Conducting pilot-testing is vital to identify and rectify deficiencies in the survey design and answer choices. The target demographic should be well defined, and invitations sent accordingly, with periodic reminders as appropriate. While reporting the survey, transparency should be maintained in the methods employed, and shortcomings and biases clearly stated, to prevent advocating an invalid hypothesis.

Disclosure: The authors have no potential conflicts of interest to disclose.

Author Contributions:

  • Conceptualization: Gaur PS, Zimba O, Agarwal V, Gupta L.
  • Visualization: Gaur PS, Zimba O, Agarwal V, Gupta L.
  • Writing - original draft: Gaur PS, Gupta L.

SUPPLEMENTARY MATERIAL

Reporting survey based research



Doing Survey Research | A Step-by-Step Guide & Examples

Published on 6 May 2022 by Shona McCombes . Revised on 10 October 2022.

Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps:

  • Determine who will participate in the survey
  • Decide the type of survey (mail, online, or in-person)
  • Design the survey questions and layout
  • Distribute the survey
  • Analyse the responses
  • Write up the results

Surveys are a flexible method of data collection that can be used in many different types of research .

Table of contents

  • What are surveys used for?
  • Step 1: Define the population and sample
  • Step 2: Decide on the type of survey
  • Step 3: Design the survey questions
  • Step 4: Distribute the survey and collect responses
  • Step 5: Analyse the survey results
  • Step 6: Write up the survey results
  • Frequently asked questions about surveys

What are surveys used for?

Surveys are used as a method of gathering data in many different fields. They are a good choice when you want to find out about the characteristics, preferences, opinions, or beliefs of a group of people.

Common uses of survey research include:

  • Social research: Investigating the experiences and characteristics of different social groups
  • Market research: Finding out what customers think about products, services, and companies
  • Health research: Collecting data from patients about symptoms and treatments
  • Politics: Measuring public opinion about parties and policies
  • Psychology: Researching personality traits, preferences, and behaviours

Surveys can be used in both cross-sectional studies , where you collect data just once, and longitudinal studies , where you survey the same sample several times over an extended period.


Step 1: Define the population and sample

Before you start conducting survey research, you should already have a clear research question that defines what you want to find out. Based on this question, you need to determine exactly who you will target to participate in the survey.

Populations

The target population is the specific group of people that you want to find out about. This group can be very broad or relatively narrow. For example:

  • The population of Brazil
  • University students in the UK
  • Second-generation immigrants in the Netherlands
  • Customers of a specific company aged 18 to 24
  • British transgender women over the age of 50

Your survey should aim to produce results that can be generalised to the whole population. That means you need to carefully define exactly who you want to draw conclusions about.

It’s rarely possible to survey the entire population of your research – it would be very difficult to get a response from every person in Brazil or every university student in the UK. Instead, you will usually survey a sample from the population.

The sample size depends on how big the population is. You can use an online sample calculator to work out how many responses you need.

There are many sampling methods that allow you to generalise to broad populations. In general, though, the sample should aim to be representative of the population as a whole. The larger and more representative your sample, the more valid your conclusions.
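As a rough illustration of how the required sample size relates to population size, the arithmetic an online sample calculator typically applies (Cochran's formula with a finite-population correction) can be computed directly. The defaults below, 95% confidence, a 5% margin of error and the conservative proportion p = 0.5, are illustrative assumptions.

```python
import math

def required_sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's formula with finite-population correction.

    z=1.96 corresponds to 95% confidence; p=0.5 is the most
    conservative assumed proportion.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)        # finite-population correction
    return math.ceil(n)

print(required_sample_size(10_000))     # roughly 370 responses needed
```

Note how the requirement plateaus: even for a population of a million, the same settings call for only around 385 responses.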

Step 2: Decide on the type of survey

There are two main types of survey:

  • A questionnaire , where a list of questions is distributed by post, online, or in person, and respondents fill it out themselves
  • An interview , where the researcher asks a set of questions by phone or in person and records the responses

Which type you choose depends on the sample size and location, as well as the focus of the research.

Questionnaires

Sending out a paper survey by post is a common method of gathering demographic information (for example, in a government census of the population).

  • You can easily access a large sample.
  • You have some control over who is included in the sample (e.g., residents of a specific region).
  • The response rate is often low.

Online surveys are a popular choice for students doing dissertation research , due to the low cost and flexibility of this method. There are many online tools available for constructing surveys, such as SurveyMonkey and Google Forms .

  • You can quickly access a large sample without constraints on time or location.
  • The data is easy to process and analyse.
  • The anonymity and accessibility of online surveys mean you have less control over who responds.

If your research focuses on a specific location, you can distribute a written questionnaire to be completed by respondents on the spot. For example, you could approach the customers of a shopping centre or ask all students to complete a questionnaire at the end of a class.

  • You can screen respondents to make sure only people in the target population are included in the sample.
  • You can collect time- and location-specific data (e.g., the opinions of a shop’s weekday customers).
  • The sample size will be smaller, so this method is less suitable for collecting data on broad populations.

Oral interviews are a useful method for smaller sample sizes. They allow you to gather more in-depth information on people’s opinions and preferences. You can conduct interviews by phone or in person.

  • You have personal contact with respondents, so you know exactly who will be included in the sample in advance.
  • You can clarify questions and ask for follow-up information when necessary.
  • The lack of anonymity may cause respondents to answer less honestly, and there is more risk of researcher bias.

Like questionnaires, interviews can be used to collect quantitative data : the researcher records each response as a category or rating and statistically analyses the results. But they are more commonly used to collect qualitative data : the interviewees’ full responses are transcribed and analysed individually to gain a richer understanding of their opinions and feelings.

Step 3: Design the survey questions

Next, you need to decide which questions you will ask and how you will ask them. It’s important to consider:

  • The type of questions
  • The content of the questions
  • The phrasing of the questions
  • The ordering and layout of the survey

Open-ended vs closed-ended questions

There are two main forms of survey questions: open-ended and closed-ended. Many surveys use a combination of both.

Closed-ended questions give the respondent a predetermined set of answers to choose from. A closed-ended question can include:

  • A binary answer (e.g., yes/no or agree/disagree )
  • A scale (e.g., a Likert scale with five points ranging from strongly agree to strongly disagree )
  • A list of options with a single answer possible (e.g., age categories)
  • A list of options with multiple answers possible (e.g., leisure interests)

Closed-ended questions are best for quantitative research . They provide you with numerical data that can be statistically analysed to find patterns, trends, and correlations .

Open-ended questions are best for qualitative research. This type of question has no predetermined answers to choose from. Instead, the respondent answers in their own words.

Open questions are most common in interviews, but you can also use them in questionnaires. They are often useful as follow-up questions to ask for more detailed explanations of responses to the closed questions.

The content of the survey questions

To ensure the validity and reliability of your results, you need to carefully consider each question in the survey. All questions should be narrowly focused with enough context for the respondent to answer accurately. Avoid questions that are not directly relevant to the survey’s purpose.

When constructing closed-ended questions, ensure that the options cover all possibilities. If you include a list of options that isn’t exhaustive, you can add an ‘other’ field.

Phrasing the survey questions

In terms of language, the survey questions should be as clear and precise as possible. Tailor the questions to your target population, keeping in mind their level of knowledge of the topic.

Use language that respondents will easily understand, and avoid words with vague or ambiguous meanings. Make sure your questions are phrased neutrally, with no bias towards one answer or another.

Ordering the survey questions

The questions should be arranged in a logical order. Start with easy, non-sensitive, closed-ended questions that will encourage the respondent to continue.

If the survey covers several different topics or themes, group together related questions. You can divide a questionnaire into sections to help respondents understand what is being asked in each part.

If a question refers back to or depends on the answer to a previous question, they should be placed directly next to one another.

Step 4: Distribute the survey and collect responses

Before you start, create a clear plan for where, when, how, and with whom you will conduct the survey. Determine in advance how many responses you require and how you will gain access to the sample.

When you are satisfied that you have created a strong research design suitable for answering your research questions, you can conduct the survey through your method of choice – by post, online, or in person.

Step 5: Analyse the survey results

There are many methods of analysing the results of your survey. First you have to process the data, usually with the help of a computer program to sort all the responses. You should also cleanse the data by removing incomplete or incorrectly completed responses.

If you asked open-ended questions, you will have to code the responses by assigning labels to each response and organising them into categories or themes. You can also use more qualitative methods, such as thematic analysis , which is especially suitable for analysing interviews.
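A minimal sketch of the coding idea, matching keywords to theme labels, is shown below. The theme names and keyword lists are invented for illustration; real qualitative coding is usually done manually or with dedicated analysis software.

```python
# Hypothetical coding frame: theme -> keywords that signal it.
themes = {
    "price":   ["expensive", "cheap", "cost", "price"],
    "service": ["staff", "support", "helpful", "rude"],
}

def code_response(text):
    """Assign every matching theme label; fall back to 'uncoded'."""
    text = text.lower()
    matched = sorted({theme for theme, words in themes.items()
                      if any(w in text for w in words)})
    return matched or ["uncoded"]

answers = [
    "The staff were very helpful",
    "Too expensive for what you get",
    "Delivery was late",
]
print([code_response(a) for a in answers])
```

Once responses carry labels, the themes can be counted and analysed like any closed-ended category.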

Statistical analysis is usually conducted using programs like SPSS or Stata. The same set of survey data can be subject to many analyses.

Step 6: Write up the survey results

Finally, when you have collected and analysed all the necessary data, you will write it up as part of your thesis, dissertation , or research paper .

In the methodology section, you describe exactly how you conducted the survey. You should explain the types of questions you used, the sampling method, when and where the survey took place, and the response rate. You can include the full questionnaire as an appendix and refer to it in the text if relevant.

Then introduce the analysis by describing how you prepared the data and the statistical methods you used to analyse it. In the results section, you summarise the key results from your analysis.

Frequently asked questions about surveys

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviours. It is made up of four or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements, and a continuum of items, usually with five or seven possible responses, to capture their degree of agreement.

Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyse your data.
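As a small worked example of combining Likert items into an overall scale score: the layout below (five items on a 1–5 scale, with the third item negatively worded and therefore reverse-coded before summing) is hypothetical.

```python
# One participant's answers to five Likert items rated 1-5.
responses = [5, 4, 2, 5, 4]

# Item at index 2 is negatively worded, so reverse-code it (1<->5, 2<->4).
reverse_items = {2}
scored = [6 - v if i in reverse_items else v
          for i, v in enumerate(responses)]

# The summed scale score is what is sometimes treated as interval data.
scale_score = sum(scored)
print(scored, scale_score)
```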

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analysing data from people using questionnaires.

Cite this Scribbr article


McCombes, S. (2022, October 10). Doing Survey Research | A Step-by-Step Guide & Examples. Scribbr. Retrieved 29 August 2024, from https://www.scribbr.co.uk/research-methods/surveys/


How to Build a Survey Results Report

Surveys offer companies a ton of quantitative and qualitative feedback into their customer experience .

Those presenting the survey findings need to do so in a readable, succinct way. This is where a survey report becomes handy. A survey report can be shared with a company’s stakeholders, leaders, other integral departments (marketing, PR, advertising, and sales), and various teammates.

A survey report pulls any key data and important findings to create a structured story around various issues. The report also offers actionable steps to resolve the issues at hand and helps companies proactively prepare for the future.

The guide below will discuss the five necessary steps to creating a condensed yet thorough survey report that audience members will find interesting and useful. For a quick example of a survey results report, see our visual below.

[Visual: how to analyze survey results]

1. Use A Structured Plan

Like a customer journey map , a survey report needs to have a structured plan. Readers want a formatted structure they can easily follow and jump from slide to slide or page to page without feeling lost. Below is the ideal survey structure:

1. Title Page

The title page should include the following: A short and engaging title, the publication and/or release date, names of those responsible for the report, and a one or two-sentence description.

2. Table of Contents

A table of contents gives the reader a quick overview of the report and allows them to quickly locate sections.

3. Executive Summary

The executive summary is one of the most important sections of the report. It summarizes the report’s main findings and proposes the next steps. Many people only read the executive summary.

4. Background

The background explains the impetus and story behind the survey. It states the hypothesis, question, or issue that prompted the research and how the results are intended to be used.

5. Survey Method

This section reviews who the target audience was and who the survey actually included. It also reviews how surveyors contacted respondents and the process of data collection. This is often a more technical and detailed section.

6. Survey Results

The survey results section is the meat and potatoes of the report. It provides an overarching theme of the report and underscores any statistical findings or significant takeaways.

7. Appendices

Similar to the methodology section, the appendices section will contain technical data and information about the data collection and analysis process .

2. Visualize The Data

A ton of numbers and statistics aren’t appealing or friendly to the average person. It’s best to present the data in a visually appealing way via graphs and charts.

Interactive dashboards are also a nice option if the report is being sent digitally. These dashboards allow readers to quickly glance over findings and to play with variables to see changes in the data.

Our interactive dashboards are great for organizing data and sending customized reports to tell the right story.


Which Data Visualization to Use When

Reports can show data in several ways. However, different visualizations work best for different data sets. The suggestions below go over the benefits of each data visualization. Note, these suggestions aren’t rigid. There’s usually flexibility for how to display data.

1. Bar Graph

A bar graph is best used when one data point is an outlier compared to the other data clusters. It is also useful to display negative numbers.

2. Line Graph

A line graph shows progress or trends over time. Reports should use it when there’s a continuous data set or multiple categories of data to portray.

3. Pie Chart

Pie charts contain static numbers that are a percentage of a whole. They’re best for displaying comparisons. Note, the total sum of all segments should equal 100%.

4. Map

A map displays geographically related data. A survey report can use one to show the proportion of data in each region, for instance, the regions where the majority of a company’s users are located.

5. Gauge

A gauge usually depicts a single value, such as a company’s average Net Promoter Score (NPS) . Use gauges to highlight especially relevant figures.
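Since NPS follows a fixed formula (the percentage of promoters scoring 9–10 minus the percentage of detractors scoring 0–6), the single value a gauge displays can be sketched directly; the ratings below are made up for illustration.

```python
def nps(ratings):
    """Net Promoter Score from 0-10 'likely to recommend' ratings:
    %promoters (9-10) minus %detractors (0-6); 7-8 are passives."""
    promoters  = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

sample = [10, 9, 8, 7, 6, 3, 9, 10]   # hypothetical survey ratings
print(nps(sample))
```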

6. Scatter Plot

A scatter plot, also known as a scattergram chart, portrays the relationship between two variables. It’s best when lots of data points exist. A scatter plot helps to reveal distribution trends and outliers.

3. Keep The Copy Simple


A survey report can quickly become too detailed. A good way to avoid overwhelming the audience is to keep the copy brief and simple. Each sentence within the survey report should give the reader new knowledge.

Tabular copy (text in graphs, charts, and tables) needs to be extremely short. The main purpose of the text in the data visualizations and dashboards is to label the data or to serve as a title, subheader, or axis label.

Survey report writers do have more leeway when it comes to headlines. Report headlines should be short, but catchy. Think of headlines as tweets that grab consumers’ attention.

Example 1: Company X increased its Net Promoter Score by 15 points last quarter, causing their Customer Lifetime Value to jump by an average of $281.

Example 2: The CX department lowered its Customer Effort Score (CES) by an average of 36% after the group training.

Lastly, before submitting or presenting a report, make sure it’s proofread and correctly punctuated. Ideally, at least two or more individuals will have reviewed and edited the piece before the final draft is submitted or shared with others.

4. Tell A Unique Story

The executive summary combined with the background section should create an engaging story. Remember, the survey report addresses an issue, takes in feedback, and then suggests solutions.

Here’s an example of how HelloFresh used surveys and survey reports to create an appealing story:

  • Character: HelloFresh, a meal delivery service
  • Issue or problem: HelloFresh needed to better understand the customer and to synthesize their customer feedback into digestible reports and presentations. These reports also needed to be shared with stakeholders, customer service reps, and both the PR and social media teams.
  • Findings from market research: HelloFresh uncovered significant differences in food taste on a hyper-geographical level, e.g., English Canadian vs French Canadian in Quebec and East vs West Berlin. They also found that customers wanted to know what was in their recipes and they requested more check-ins during the ordering process.
  • Solutions: They modified their recipes to their consumers’ preferences and added touch points to keep customers informed and up-to-date.

5. Types Of Survey Results Reports

Part of conducting a thorough survey and creating an engrossing report means selecting the best type of survey. Companies must choose a survey type that will give the best customer feedback based on the problems or issues they’re facing.

The list below will help companies determine which type of survey will give them the most reliable results.

1. Real-Time Summary Report

A real-time summary report is typically an interactive dashboard that gives live feedback via charts, statistics, and graphs from the collected survey data.

Best for: Both qualitative and quantitative surveys, as it reports in real time.

2. Open-Ended Text Reports

Open-ended text reports use text analytics and sentiment analysis tools to identify patterns or themes from customers’ survey responses.

Best for: Qualitative surveys that contain open-ended responses.

3. Report Scheduler

A report scheduler automatically sends a survey result or report to other users or departments within an organization. Think of them as check-ins. Typically, a CX department uses schedulers to prompt reports for long-running surveys that require check-ins and monitoring.

Best for: Quantitative surveys. A report scheduler can generate overarching quick reports from the collected data.

4. Gap Analysis Report

A gap analysis is best used to analyze two scale-type questions. For instance, if a customer had two interactions with different customer representatives, a customer satisfaction survey could ask them to rate the satisfaction of each. The gap analysis report would then compare satisfaction levels between the two of them.

Best for: Both qualitative and quantitative surveys. A gap analysis report will almost always draw on quantitative questions, since it asks the respondent for a numbered rating. It can become qualitative if a second, open-ended question is added, such as: “What made you rate (Employee Name) higher or lower?”
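In code, the comparison at the heart of a gap analysis is just a difference of averages. Here’s a minimal Python sketch using made-up paired ratings (field names and scores are hypothetical):

```python
from statistics import mean

# Hypothetical paired ratings: each respondent scored two support
# interactions on a 1-5 satisfaction scale.
ratings = [
    {"rep_a": 5, "rep_b": 3},
    {"rep_a": 4, "rep_b": 4},
    {"rep_a": 5, "rep_b": 2},
    {"rep_a": 3, "rep_b": 3},
]

avg_a = mean(r["rep_a"] for r in ratings)
avg_b = mean(r["rep_b"] for r in ratings)
gap = avg_a - avg_b  # positive: the first rep outperformed the second

print(f"Rep A: {avg_a:.2f}, Rep B: {avg_b:.2f}, gap: {gap:.2f}")
```

The gap itself is the headline number; the follow-up open-ended question explains *why* it exists.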

5. Spotlight Report

A spotlight report homes in on one respondent or a target group of respondents. Their responses can then be compared to the overall survey responses.

Best for: Both qualitative and quantitative surveys, since a spotlight report highlights a small group’s or an individual’s responses.

6. Trend Analysis Report

A trend analysis report shows significant currents or tendencies in the data over the past few weeks, months, or even years. Trend analysis can help businesses refine their surveys by reviewing response rates, or surface large, overarching themes in customer feedback.

Best for: Both qualitative and quantitative surveys. Typically, trend analysis reports will offer more insight into quantitative surveys since they’re best for reviewing a large amount of data.

However, with a text analytics tool, trends in qualitative data can also be reported on, since the tool automatically creates many sentiment data points.

The visual below depicts ways companies best visualize their data in a survey results report .

Customer satisfaction survey report

Both quantitative and qualitative surveys bring in a wealth of insight and data about customers, and specifically about the customer experience. Survey results reports are vital for companies that want to succinctly share findings with both internal and external stakeholders.

Book a Demo to learn about how Chattermill can help you better understand your customer surveys.



How to Analyze and Present Survey Results


Written by: Orana Velarde


Are your survey results sitting in a file on your computer waiting to be analyzed? Or maybe there’s a stack of filled out forms somewhere in your office?

It’s time to get that survey data ready to present to stakeholders or members of your team.

Visme has all the tools you need to visualize your survey data in a report , infographic, printable document or an online, interactive design.

In this article, we’ll help you understand what a survey is and how to conduct one. We’ll also show you how to analyze survey data and present it with visuals.

Let’s get started.

Jump to the Section You Want

  • What is a survey?
  • The 4 best tools for creating surveys
  • How to analyze survey results
  • How to present survey results with Visme

A survey is a study that involves asking a group of people all the same questions. It’s a research activity that aims to collect data about a particular topic.

A survey usually consists of at least one question and can be as long as tens of questions. The length of your survey depends on the nature of the research.

Surveys can be categorized into three main types:

  • Cross-sectional. This survey samples a cross-section of a larger population within a short time frame. These are usually short and easy to answer quickly.
  • Longitudinal. These are for collecting survey data over a longer period of time to learn about a shift or transformation in opinion or thought about a particular topic. The people surveyed are the same every time.
  • Retrospective. In this case, the survey includes questions about events that happened in the past.

When it comes to survey results, your data can be either qualitative or quantitative.

  • Quantitative. This data is collected from closed-ended questions. These can be numerical answers, or yes and no answers. Quantitative surveys are easier to analyze and chart because of their nature.
  • Qualitative. This data is collected from open-ended questions. Qualitative surveys usually ask participants their opinion about a particular topic. The answers can be harder to analyze and chart, as they need to be grouped and simplified first.

The survey results infographic below is from a quantitative survey where participants simply chose their favorites from a list. Customize it to use for your own data.


Surveys are conducted in different ways, depending on the needs of the surveyor and proximity of participants. While some surveys are conducted face-to-face, others are carried out via telephone, or self-administered digitally or on paper.

Surveys can be conducted for lots of different reasons, such as:

  • Market research for businesses and brands
  • Election polls or intended participation
  • Customer satisfaction and/or suggestions
  • User research for UX design
  • Brand tracking studies
  • And much more…

To conduct a successful survey, you need the right tools. For face-to-face surveys, you’ll need a group of people who will visit participants, enough printed survey copies or a way to record spoken answers.

For telephone surveys, you’ll need a group of people who can call participants over the phone. You’ll also need a computer program or printed survey question forms where the surveyor can record the data.

For online surveys, you can use a number of different tools. Below are our favorites:

  • Typeform. Create a Typeform directly on their website or right inside your Visme dashboard. To collect and analyze the survey data from a Typeform, download it as an Excel or CSV file. For more than 20 answers, connect the Google Sheets integration to your Typeform.
  • Google Forms. Collecting survey data in a Google Form is easy. The answers are instantly added to a spreadsheet that you can then further analyze and present with Visme later.
  • SurveyMonkey. Creating a survey in SurveyMonkey isn’t just easy, they also offer data analysis tools for your results like filtering and grouping. Furthermore, Survey Monkey offers simple presenting tools for your data. You can also download the results as a CSV or Excel file like with many other tools, and then present visually with Visme.
  • Stripo. With this tool, you can create a survey directly in an email and save all your results to analyze. Get an example of what your survey email could look like.
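Whichever tool you export from, the CSV lands in roughly the same shape. Here’s a short sketch of reading such an export with Python’s standard library; the file contents and column names are invented for illustration:

```python
import csv
import io

# A hypothetical survey export, shaped like the CSV you might download
# from Typeform or SurveyMonkey (column names invented for illustration).
raw = io.StringIO(
    "respondent_id,attending_next_year,company_size\n"
    "1,Yes,Enterprise\n"
    "2,No,Small business\n"
    "3,Yes,Self-employed\n"
)

rows = list(csv.DictReader(raw))  # each row becomes a dict keyed by header
print(len(rows), rows[0]["attending_next_year"])
```

In practice you would pass a real file handle instead of `io.StringIO`; the parsed rows are then ready for counting, cross-tabulating, or charting.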

How to Create a Typeform Survey With Visme

With Visme, not only can you present your survey results, you can also create a survey! With our Typeform integration, creating a survey in a Visme project is as easy as inserting a new chart.

A screenshot of how you can access the Typeform integration in Visme.

  • Step 1: Create an area for your Typeform — a new block, slide or section — and click on the Apps button on the left tools panel.
  • Step 2: Click on the Typeform button to connect your Typeform account.
  • Step 3: Once you’ve signed in to your Typeform account, you can import any Typeform that you’ve previously created as long as it's not private or unpublished.

When you present survey data results with visuals, the trends and conclusions are easier and faster to understand.

But before you can do that, you’ll first need to analyze your results. The analysis process depends on the type of survey conducted and how the data was collected.

For example, simple online quantitative surveys can be fed directly to a spreadsheet, while qualitative surveys conducted face-to-face will need considerably more data entry work.

According to Thematic , these are the 5 steps you need to follow for best analysis results.


As you can see, the analysis starts even before creating the survey. This helps make sure that you are asking the right questions.

The data must then be organized into a filterable spreadsheet or table. The most common survey software available for analysis work is Microsoft Excel or Google Sheets.

To analyze more complex data, another great tool is Tableau — a powerful analysis and visualization tool. In fact, for large survey datasets, we suggest using a mix of Tableau visualizations embedded into your Visme project along with our signature data widgets.

Now that we’ve looked at all the steps involved in conducting a survey, collecting data and analyzing it, let’s find out how to present your survey results with visuals.

Presenting survey results visually makes it easier to spot trends, arrive at conclusions and put the data to practical use. It’s essential to know how to present data to share insights with stakeholders and team members to get your message across.

You can easily make your survey data look beautiful with the help of Visme’s graph maker, data widgets and powerful integrations.

Check out the video below to learn more about how you can customize data and present it using data visualization tools in Visme.


Aside from data visualization, Visme lets you create interactive reports, presentations, infographics and other designs to help you better present survey results.

To give you more ideas, here are 9 unique ways to present survey results in Visme.

1. Create a Presentation

While many times you’ll put together a document, one-pager or infographic to visualize survey results, sometimes a presentation is the perfect format.

Create a survey presentation like the one below to share your findings with your team.

2. Create a Report

A multi-page report is a great way to print out a hard copy of your survey results, and formally share it with your team, management or stakeholders.

Here’s a survey report template in Visme you can customize.


You can also share interactive versions of your report online using Visme. After you finish designing your survey results report, simply hit publish to generate a shareable URL.

3. Add a Chart or Graph

The best way to present survey results is with a chart or graph. The type of chart you choose depends on the nature of your data. Below, we’ll take a look at two common types of charts you can use to visualize and present your survey data.

Bar graphs.

If you had a smaller survey and really want to visualize one main result, this bar graph survey results template is the perfect solution.

Insert your own information so you can quickly visualize the largest bars, giving you more insight into your audience.


Pie charts.

To visualize parts of a whole, a pie chart can really help to differentiate the answers that your audience gave. Look to see which responses were most popular to help you make more informed choices for your brand.
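Before either chart type, the raw answers need to be tallied into shares. A quick Python sketch with illustrative data; each resulting share becomes one pie slice, or one bar in a bar graph:

```python
from collections import Counter

# Illustrative multiple-choice answers, tallied into percentage shares.
answers = ["Red", "Blue", "Red", "Green", "Blue", "Red"]
counts = Counter(answers)
total = sum(counts.values())

# Percentage of respondents per choice, rounded to one decimal place.
shares = {choice: round(100 * n / total, 1) for choice, n in counts.items()}
print(shares)
```

The `shares` dictionary is exactly the data you would feed into a chart maker.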


4. Visualize Text With Icons

Incorporating some of your survey questions into your report helps your audience understand your results better. Take it a step further by adding relevant icons to help visualize those questions.

Customize this template with your survey information before presenting it to your team.


5. Use Pictographs

Another great way to use icons in your survey results report is with pictographs, or icon arrays. Pictographs use symbols like icons and shapes to convey meaning.

Use icon arrays to visualize sections of a whole. For example, you can use icons of people to visualize population data. Need to visualize the difference between cat lovers and dog lovers? Use an array with cat icons in different colors.

Here’s an example of a survey results report that uses pictographs to visualize psychographic data among a population.


6. Create an Interactive Map

One more way to present survey results is with maps. This is a great solution for visualizing geographic data. In Visme, you have several options to help you create interactive maps:

  • Choose between the world, regions, countries and states.
  • Use the eye icons to hide or show different sections.
  • Color code the sections as you wish.
  • Make sections interactive by adding popups and hover effects.


7. Incorporate Creative Graphics

Get creative showcasing your results by adding graphics and illustrations that help represent your data. In the template below, we’ve used a human body to help visualize the survey results.


8. Use Multiple Data Widgets

If you want to show your survey results data in a snackable format, try using data widgets. These are perfect for showing percentages and quantitative comparisons in many different styles.

The best way to use them is to visualize one question of the survey at a time. For example, use one widget for the percentage of yes answers and another for the no answers.

In this template, you can easily customize multiple widgets to visualize different kinds of results and responses.


9. Embed Tableau Visualizations

Last but not least, you have the third-party embed option. With this tool, you can embed any Tableau visualization into a Visme project.

This is a great option if your data is more complex, or if you are a Tableau user who just wants to create better presentations with Visme.

To embed a Tableau into Visme, open the Media tab on the left-hand sidebar, then click on Embed Online Content. From the drop-down, select HTML.


Copy and paste the HTML from your Tableau visualization and paste it into Visme. Now your Tableau is part of a complete survey results report made with Visme!

Ready to Visualize and Present Your Survey Results?

To get started with visualizing your survey results, log in to your Visme account and choose one of the survey results templates.

If you don’t have a Visme account, creating one is easy and free . Simply register with your email and you’re good to go.  Leave a comment below if you have any questions!


About the Author

Orana is a multi-faceted creative. She is a content writer, artist, and designer. She travels the world with her family and is currently in Istanbul. Find out more about her work at oranavelarde.com


How to analyze survey data: best practices for actionable insights from survey analysis

Just started using a new survey tool? Collected all of your survey data? Great. Confused about what to do next and how to achieve optimal survey analysis? Don’t be.

If you’ve ever stared at an Excel sheet filled with thousands of rows of survey data and not known what to do, you’re not alone. Use this post as a guide to lead the way to execute best practice survey analysis.

Customer surveys can have a huge impact on your organization. Whether that impact is positive or negative depends on how good your survey is (no pressure). Has your survey been designed soundly? Does your survey analysis deliver clear, actionable insights? And do you present your results to the right decision makers? Only if the answer to all those questions is yes can new opportunities and innovative strategies be created.

What is survey analysis?

Survey analysis refers to the process of analyzing your results from customer (and other) surveys. This can, for example, be Net Promoter Score surveys that you send a few times a year to your customers.

Why do you need best-in-class survey analysis?

Data on its own means nothing without proper analysis. Thus, you need to make sure your survey analysis produces meaningful results that help make decisions that ultimately improve your business.

There are multiple ways of doing this, both manual and through software, which we’ll get to later.

Types of survey data

Data exists as numerical and text data, but for the purpose of this post, we will focus on text responses here.

Closed-ended questions

Closed-ended questions can be answered by a simple one-word answer, such as “yes” or “no”. They often consist of pre-populated answers for the respondent to choose from; while an open-ended question asks the respondent to provide feedback in their own words.

Closed-ended questions come in many forms such as multiple choice, drop down and ranking questions.

In this case, they don’t allow the respondent to provide original or spontaneous answers but only choose from a list of pre-selected options. Closed-ended questions are the equivalent of being offered milk or orange juice to drink instead of being asked: “What would you like to drink?”

These types of questions are designed to create data that are easily quantifiable and easy to code, so the responses are final in nature. They also allow researchers to categorize respondents into groups based on the options they have selected.

Open-ended questions

An open-ended question is the opposite of a closed-ended question. It’s designed to produce a meaningful answer and create rich, qualitative data using the subject’s own knowledge and feelings.

Open-ended questions often begin with words such as “Why” and “How”, or sentences such as “Tell me about…”. Open-ended questions also tend to be more objective and less leading than closed-ended questions.

How to analyze survey data

How do you find meaningful answers and insights in survey responses?

To improve your survey analysis, use the following 5 steps:

  • Start with the end in mind – what are your top research questions?
  • Filter results by cross-tabulating subgroups
  • Interrogate the data
  • Analyze your results
  • Draw conclusions

1. Check off your top research questions

Go back to your main research questions, which you outlined before you started your survey. Don’t have any? You should have set some out when you set a goal for your survey. (More on survey planning below.)

A top research question for a business conference could be: “How did the attendees rate the conference overall?”.

The percentages in this example show how many respondents answered a particular way, or rather, how many people gave each answer as a proportion of the number of people who answered the question.

Thus, 60% of your respondents (1098 of those surveyed) are planning to return. That’s the majority, even though almost a third are not planning to come back. Maybe there’s something you can do to convince the 11% who are not sure yet!

Survey table
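The percentage arithmetic above is easy to reproduce in a few lines of Python. The counts below are illustrative, chosen so they match the 60% and 11% figures quoted in the example:

```python
# Illustrative counts: a share is the count for an answer divided by
# the number of people who answered the question.
responses = {"Yes": 1098, "No": 531, "Not sure": 201}
answered = sum(responses.values())

for answer, count in responses.items():
    print(f"{answer}: {100 * count / answered:.0f}%")
```

Always divide by the number of people who answered the question, not the number you invited; non-response makes the two very different.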

2. Filter results by cross-tabulating subgroups

At the start of your survey, you will have set up goals for what you wanted to achieve and exactly which subgroups you wanted to analyze and compare against each other.

This is the time to go back to those and check how they (for example the subgroups; enterprises, small businesses, self-employed) answered, with regards to attending again next year.

For this, you can cross-tabulate, and show the answers per question for each subgroup.


Here, you can see that most of the enterprises and the self-employed must have liked the conference as they’re wanting to come back, but you might have missed the mark with the small businesses.

By looking at other questions and interrogating the data further, you can hopefully figure out why and address this, so you have more of the small businesses coming back next year.

You can also filter your results by a specific type of respondent, or subgroup: that is, look at how one subgroup (women, say) answered the question without comparing it to others.

Then apply the cross-tab to drill further into the attendees: female enterprise attendees, female self-employed attendees, and so on. Just remember that your sample size shrinks every time you slice the data this way, so check that you still have a valid sample size.
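A cross-tab is simple to compute once the responses are paired with their subgroup. Here’s a minimal Python sketch with sample data chosen to mirror the conference example (subgroup names and counts are illustrative):

```python
from collections import Counter

# Cross-tabulating "attending next year?" by subgroup (sample data).
responses = [
    ("Enterprise", "Yes"), ("Enterprise", "Yes"), ("Enterprise", "No"),
    ("Small business", "No"), ("Small business", "No"), ("Small business", "Yes"),
    ("Self-employed", "Yes"), ("Self-employed", "Yes"), ("Self-employed", "Not sure"),
]

crosstab = Counter(responses)                   # (subgroup, answer) -> count
totals = Counter(sub for sub, _ in responses)   # respondents per subgroup

for sub in sorted(totals):
    share = 100 * crosstab[(sub, "Yes")] / totals[sub]
    print(f"{sub}: {share:.0f}% plan to return")
```

With larger datasets the same idea is one call to a spreadsheet pivot table or, in Python, `pandas.crosstab`.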

3. Interrogate the data

Look at your survey questions and really interrogate them. The following are some questions we use for this:

  • What are the most common responses to question X?
  • Which responses are affecting/impacting us the most?
  • What’s different about this month/this year?
  • What did respondents in group Y say?
  • Which group of respondents is most affected by issue Z?
  • Have customers noticed our efforts in solving issue Z?
  • What do people say about Z?

For example, look at questions 1 and 2. The difference between the two is that the first returns the volume, whereas the second looks at volume relative to a particular satisfaction score. If something is very common, it may not affect the score. But if, for example, your Detractors in an NPS survey mention something a lot, that particular theme will be affecting the score in a negative way. These two questions are important to take hand in hand.

You can also compare different slices of the data, such as two different time periods, or two groups of respondents. Or, look at a particular issue or a theme, and ask questions such as “have customers noticed our efforts in solving a particular issue?”, if you’re conducting a continuous survey over multiple months or years.

Analyzing results and drawing conclusions are whole topics in themselves. Our best tips for analysis follow below; for best practice on drawing conclusions, see our post How to get meaningful, actionable insights from customer feedback.

4 best practices for analyzing survey data

Make sure you incorporate these tips in your analysis, to ensure your survey results are successful.

1. Ensure sample size is sufficient

To make sure you have a sufficient sample size, consider how many people you need to survey in order to get an accurate result.

You most often will not be able to, and shouldn’t for practicality reasons, collect data from all of the people you want to speak to. So you’d take a sample (or subset) of the people of interest and learn what you can from that sample.

Clearly, if you are working with a larger sample size, your results will be more reliable as they will often be more precise. A larger sample size does often equate to needing a bigger budget though.

The way to get around this issue is to perform a sample size calculation before starting a survey. Then, you can have a large enough sample size to draw meaningful conclusions, without wasting time and money on sampling more than you really need.

Consider how much margin of error you’re comfortable working with first, as your sample can only ever estimate how the overall population thinks and behaves.
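A common way to run that calculation is Cochran’s sample size formula. Here’s a small Python sketch; the default values (95% confidence, ±5% margin) are conventional choices, not requirements:

```python
import math

# Cochran's formula for the sample size needed to estimate a
# proportion: n = z^2 * p * (1 - p) / e^2, where z is the z-score
# for the confidence level, p the expected proportion (0.5 is the
# most conservative choice), and e the margin of error.
def sample_size(z=1.96, p=0.5, margin_of_error=0.05):
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

print(sample_size())                      # ±5% at 95% confidence
print(sample_size(margin_of_error=0.03))  # a tighter margin needs more people
```

Note how quickly the required sample grows as the margin of error tightens; that growth is what drives the budget trade-off mentioned above.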

2. Statistical significance – and why it matters

How do you know you can “trust” your survey analysis, i.e., that you can use the answers with confidence as a basis for your decision making? In this regard, the “significant” in statistical significance refers to how accurate your data is: your results are not based on pure chance but are, in fact, representative of the population you sampled. If your data has statistical significance, it means that to a large extent, the survey results are meaningful.

It also shows that your respondents “look like” the total population of people about whom you want to draw conclusions.
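One concrete way to express that confidence is the margin of error around an observed proportion. This sketch uses the standard normal approximation; the 60%-of-1830 figures are illustrative, echoing the conference example earlier:

```python
import math

# Margin of error for an observed proportion at roughly 95%
# confidence, using the normal approximation.
def margin_of_error(p, n, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

p, n = 0.60, 1830   # e.g. 60% answered "Yes" out of 1830 respondents
moe = margin_of_error(p, n)
print(f"{p:.0%} plus or minus {moe:.1%}")
```

If two subgroups’ intervals don’t overlap, the difference between them is unlikely to be chance; if they do overlap, be cautious about drawing conclusions.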

3. Focus on your insights, not the data

When presenting to your stakeholders, it’s imperative to highlight the insights derived from your data, rather than the data itself.

Presenting raw data alone does you a disservice. Don’t wait for your team to create insights out of the data; you’ll get a better response and better feedback if you are the one who demonstrates the insights, as that goes beyond just sharing percentages and data breakouts.

4. Complement with other types of data

Don’t stop at the survey data alone. When presenting your insights to your stakeholders or board, it’s always helpful to use different data points, which might even include personal experiences. If you have personal experience with the topic, use it! If you have qualitative research that supports the data, use it!

So, if you can overlap qualitative research findings with your quantitative data, do so.

Just be sure to let your audience know when you are showing them findings from statistically significant research and when it comes from a different source.

3 ways to code open-ended responses

When you analyze open-ended responses, you need to code them. There are three approaches to coding open-ended questions; here’s a taster:

  • Manual coding by someone internally.   If you receive 100-200 responses per month, this is absolutely doable. The big disadvantage is the high likelihood that whoever codes your text will apply their own biases and simply not notice particular themes, because they subconsciously don’t think they’re important to monitor.
  • Outsource to an agency.  You can email the results and they would simply send back coded responses.
  • Automating the coding.  You use an algorithm to simulate the work of a professional human coder.

Whichever way you code text, you want to determine which category a comment falls under. In the example below, any comment about friends or family falls into the second category. Then, you can easily visualize it as a bar chart.

From text to code to analysis

Code frames can also be combined with a sentiment.

Below, we’re inserting the positive and the negative layer under the customer service theme.

Using codes in a hierarchical coding frame

So, next, you apply this code frame. Below are snippets from a manual coding job commissioned to an agency.

In the first snippet, there’s a code frame. Under code 1, they code “Applied courses”, and under code 2, “Degree in English”. In the second snippet, you can see the actual coded data, where each comment has up to 5 codes from the above code frame. You can imagine that it’s actually quite difficult to analyze data presented this way in Excel; it’s much easier to do with software.

Survey data coding
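To make the coding idea concrete, here’s a minimal keyword-based coder in the spirit of the code frames above. The themes and keyword lists are hypothetical, and naive keyword matching is brittle (as the Excel example later in this post shows), so treat this as a starting point only:

```python
# Hypothetical code frame: theme -> keywords that trigger it.
CODE_FRAME = {
    "Customer service": ["service", "support", "staff"],
    "Friends & family": ["friend", "family"],
    "Price": ["price", "cost", "expensive"],
}

def code_comment(comment):
    """Return every theme whose keywords appear in the comment."""
    text = comment.lower()
    return [theme for theme, keywords in CODE_FRAME.items()
            if any(kw in text for kw in keywords)]

print(code_comment("Great support staff, but a bit expensive."))
print(code_comment("My family loved it."))
```

Counting the coded themes across all comments gives you exactly the bar-chart data described above.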

The best survey analysis software tools

Traditional survey analysis is highly manual, error-prone, and subject to human bias. You may think of it as the most economical solution, but in the long run it often ends up costing you more (due to the time it takes to set up and analyze, the human resource required, and any errors or bias that result in inaccurate analysis and faulty interpretation of the data). So, the question is:

Do you need software?

When you’re dealing with large amounts of data, it is impossible to manage it all properly by hand: there’s simply too much of it, you’re looking to avoid bias, or it’s a long-term study, for example. Then there is no other option but to use software.

On a large scale, software is ideal for analyzing survey results as you can automate the process by analyzing large amounts of data simultaneously. Plus, software has the added benefit of additional tools that add value.

Below we give just a few examples of types of software you could use to analyze survey data. Of course, these are just a few examples to illustrate the types of functions you could employ.

1. Thematic software

As an example, with Thematic’s software solution you can identify trends in sentiment and particular themes. Bias is also avoided as it is a software tool, and it doesn’t over-emphasize or ignore specific comments to come to unquantified conclusions.

Below is an example we’ve taken from the tool, to visualize some of Thematic’s features.


Our visualization tools show far more detail than the word clouds that are more typically used.

You can see two different slices of data. The blue bars are United Airlines’ 1 and 2-star reviews, and the orange bars are the 4 and 5-star reviews. You can identify the biggest issue, mentioned most frequently in the 1 and 2-star reviews: flight delays. The 4 and 5-star reviews, on the other hand, contain frequent praise for the friendliness of the airline.

You can find more features, such as Thematic’s Impact tool, Comparison, Dashboard and Themes Editor  here.

2. Microsoft Excel

If you’re a DIY analyzer, there’s quite a bit you can do in Excel. Clearly, you don’t get the sophisticated features of an online software tool, but for simple tasks it does the trick. You can count the different types of feedback (responses) in the survey, calculate percentages for the different responses, and generate a survey report with the calculated results. For a technical overview, see  this article.

Excel table to analyze data

You can also build your own text analytics solution, and rather fast.

How to build a Text Analytics solution in 10 minutes

The following is an excerpt from a blog written by Alyona Medelyan, PhD in Natural Language Processing & Machine Learning.

As she mentions, you can type in a formula, like this one, in Excel to categorize comments into “Billing”, “Pricing” and “Ease of use”:

Categorize comments in Excel

It can take less than 10 minutes to create this, and the result is so encouraging! But wait…

Everyone loves simplicity. But in this case, simplicity sucks

Various issues can easily crop up with this approach, see the image below:

NPS category

Out of 7 comments, only 3 were categorized correctly here. “Billing” is actually about “Price”, and three other comments missed additional themes. Would you bet your customer insights on something that’s at best 50% accurate?

3. Nvivo

Developed by QSR International,  Nvivo  is a tool where you can store, organize, categorize and analyze your data, and also create visualizations. Nvivo lets you store and sort data within the platform, automatically sort sentiment, themes and attributes, and exchange data with SPSS for further statistical analysis. There’s a transcription tool for quick transcription of voice data.

It’s a no-frills online tool, great for academics and researchers.


4. Interpris

Interpris is another tool from QSR International, which lets you import and store free-text data directly from platforms such as SurveyMonkey and keep all your data in one place. It has numerous features, for example automatically detecting and categorizing themes.

Favoured by government agencies and communities, it’s good for employee engagement, public opinion and community engagement surveys.

Other tools worth mentioning (for survey analysis but not open-ended questions) are SurveyMonkey, Tableau and DataCracker.

There are numerous tools on the market, and they all have different features and benefits. Choosing the right tool will depend on your needs, the amount of data, the time you have for your project and, of course, your budget. The important part to get right is to choose a tool that is reliable, provides quick and easy analysis, and is flexible enough to adapt to your needs.

One idea is to check the list of the product’s existing clients, which is often published on its website. Crucially, you’ll want to test the tool, or at the least get a demo from the sales team, ideally using your own data so that you can use the time to gather new insights.


A few tips on survey design

Good surveys start with smart survey design. First, you need to plan for success. Here are a few tips:

Our 9 top tips for survey design planning

1. Keep it short

Only include questions that you are actually going to use. Lots of questions might seem useful, but they can actually hurt your survey results. We also often ask redundant questions that don’t contribute to the main problem we want to solve. A survey can be as short as three questions.

2. Use open-ended questions first

To avoid imposing your own assumptions, use open-ended questions first. Often, we start with a few checkboxes or lists, which can be intimidating for survey respondents. An open-ended question feels warmer and more inviting – it makes people feel like you want to hear what they have to say, and it starts a conversation. Open-ended questions give you more insightful answers; closed questions are easier to respond to and easier to analyze, but they do not create rich insights.

The best approach is to use a mix of both types of questions, as answering a variety of question types is more engaging for respondents.

3. Use surveys as a way to present solutions

Your surveys will reveal what areas in your business need extra support or what creates bottlenecks in your service. Use your surveys as a way of presenting solutions to your audience and getting direct  feedback  on those solutions in a more consultative way.

4. Consider your timing

It’s important to think about the timing of your survey. Take into account when your audience is most likely to respond to your survey and give them the opportunity to do it at their leisure, at the time that suits them.

5. Challenge your assumptions

It’s crucial to challenge your assumptions, as it’s very tempting to assume you know why things are the way they are. There is usually more than meets the eye in a person’s preferences and background, and this can affect the scenario.

6. Have multiple survey writers

Having multiple survey writers can be helpful: when people read each other’s work and test the questions, it helps address the fact that most questions can be interpreted in more than one way.

7. Choose your survey questions carefully

When you’re choosing your survey questions, make each one count. Only use questions that can make a difference to your end outcomes.

8. Be prepared to report back results and take action

As a respondent, you want to know that your responses count, are reviewed, and are making a difference. As an incentive, you can share the results with the participants, for example in the form of a benchmark or a measurement that you report back to them.

9. What’s in it for them?

Always think about what customers (or survey respondents) want and what’s in it for them. Many businesses don’t actually think about this when they send out their surveys.

If you can nail the “what’s in it for me”, you automatically solve many of the survey’s potential issues, such as whether the respondents have enough incentive to complete it.

For a good survey design, always ask:

  •      What insight am I hoping to get from this question?
  •      Is it likely to provide useful answers?

For more pointers on how to design your survey for success, check out our blog on 4 Steps to Customer Survey Design – Everything You Need to Know.


Agi loves writing! She enjoys breaking down complex topics into clear messages that help others. She speaks four languages fluently and has lived in six different countries.



Harvard University Program on Survey Research

  • How to Frame and Explain the Survey Data Used in a Thesis

Surveys are a special research tool with strengths, weaknesses, and a language all of their own. There are many different steps to designing and conducting a survey, and survey researchers have specific ways of describing what they do.

This handout, based on an annual workshop offered by the Program on Survey Research at Harvard, is geared toward undergraduate honors thesis writers using survey data.



How to Analyze Survey Results Like a Data Pro

Swetha Amaresan

Updated: November 23, 2021

Published: October 04, 2021

Obtaining customer feedback is difficult. You need strong survey questions that effectively derive customer insights. Not to mention a distribution system that shares the survey with the right customers at the right time. However, survey data doesn't just sort and analyze itself. You need a team dedicated to sifting through survey results and highlighting key trends and behaviors for your marketing, sales, and customer service teams. In this post, we'll discuss not only how to analyze survey results, but also how to present your findings to the rest of your organization.

survey-results

Short on time? Jump to the topics that interest you most:

How to Analyze Survey Results

How to Present Survey Results

How to Write a Survey Report

Survey Report Template Examples

1. Understand the four measurement levels.

Before analyzing data, you should understand the four levels of measurement. These levels determine how survey questions should be measured and what statistical analysis should be performed. The four measurement levels are nominal scales, ordinal scales, interval scales, and ratio scales.

Nominal Scale

Nominal scales classify data without any quantitative value, similar to labels. An example of a nominal scale is, "Select your car's brand from the list below." The choices have no relationship to each other. Due to the lack of numerical significance, you can only keep track of how many respondents chose each option and which option was selected the most.



Ordinal Scale

Ordinal scales are used to depict the order of values. For this scale, there’s a quantitative value because one rank is higher than another. An example of an ordinal scale is, “Rank the reasons for using your laptop.” You can analyze both mode and median from this type of scale, and ordinal scales can be analyzed through cross-tabulation analysis.

Interval Scale

Interval scales depict both the order and difference between values. These scales have quantitative value because data intervals remain equivalent along the scale, but there’s no true zero point. An example of an interval scale is an IQ test. You can analyze mode, median, and mean from this type of scale, and analyze the data through ANOVA, t-tests, and correlation analyses. ANOVA tests the significance of survey results, while t-tests and correlation analyses determine whether datasets are related.

Ratio Scale

Ratio scales depict the order and difference between values, but unlike interval scales, they do have a true zero point. With ratio scales, there's quantitative value because the absence of an attribute can still provide information. For example, a ratio scale could be, "Select the average amount of money you spend online shopping." You can analyze mode, median, and mean with this type of scale and ratio scales can be analyzed through t-tests, ANOVA, and correlation analyses as well.
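As an illustration, the statistics each scale supports can be computed with Python’s standard library (all of the data below is hypothetical):

```python
import statistics

# Hypothetical survey results for each measurement level
car_brands = ["Toyota", "Ford", "Toyota", "Honda"]  # nominal: mode only
satisfaction = [1, 2, 2, 3, 5]                      # ordinal: mode and median
monthly_spend = [0, 20, 40, 60, 80]                 # ratio: mode, median, and mean

print(statistics.mode(car_brands))      # Toyota
print(statistics.median(satisfaction))  # 2
print(statistics.mean(monthly_spend))   # 40
```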

2. Select your survey question(s).

Once you understand how survey questions are analyzed, you should take note of the overarching survey question(s) you’re trying to answer. Perhaps it’s “How do respondents rate our brand?”

Then, look at survey questions that answer this research question, such as "How likely are you to recommend our brand to others?" Segmenting your survey questions will isolate data that are relevant to your goals.

Additionally, it's important to ask both close-ended and open-ended questions.

Close-Ended Questions

A close-ended survey question gives a limited set of answers. Respondents can't explain their answer and they can only choose from pre-determined options. These questions could be yes or no, multiple-choice, checkboxes, dropdown, or a scale question. Asking a variety of questions is important to get the best data.

Open-Ended Questions

An open-ended survey question will ask the respondent to explain their opinion. For example, in an NPS survey, you'll ask how likely a customer is to recommend your brand. After that, you might consider asking customers to explain their choice. This could be something like "Why or why wouldn't you recommend our product to your friends/family?"

3. Analyze quantitative data first.

Quantitative data is valuable because it uses statistics to draw conclusions. While qualitative data can bring more interesting insights about a topic, this information is subjective, making it harder to analyze. Quantitative data, however, comes from close-ended questions which can be converted into a numeric value. Once data is quantified, it's much easier to compare results and identify trends in customer behavior .

It's best to start with quantitative data when performing a survey analysis. That's because quantitative data can help you better understand your qualitative data. For example, if 60% of customers say they're unhappy with your product, you can focus your attention on negative reviews about user experience. This can help you identify roadblocks in the customer journey and correct any pain points that are causing churn.

4. Use cross-tabulation to better understand your target audience.

Analyzing all of your responses as one group isn’t an effective way to get accurate information. Respondents who aren’t your ideal customers can overrun your data and skew survey results. Instead, if you segment responses using cross-tabulation, you can analyze how your target audience responded to your questions.

Split Up Data by Demographics

Cross-tabulation records the relationships between variables. It compares two sets of data within one chart. This reveals specific insights based on your participants' responses to different questions. For example, you may be curious about customer advocacy among your customers based in Boston, MA. You can use cross-tabulation to see how many respondents said they were from Boston and said they would recommend your brand.

By pulling multiple variables into one chart, we can narrow down survey results to a specific group of responses. That way, you know your data is only considering your target audience.

Below is an example of a cross-tabulation chart. It records respondents' favorite baseball teams and what city they reside in.

survey analysis cross tabulation
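Under the hood, a cross-tab is just a frequency count over pairs of answers. A minimal standard-library sketch with made-up respondents (a tool like pandas’ `pd.crosstab` builds the same table with nicer formatting):

```python
from collections import Counter

# Hypothetical respondents: (home city, "would you recommend us?")
respondents = [
    ("Boston", "Yes"), ("Boston", "No"), ("Chicago", "Yes"),
    ("Boston", "Yes"), ("Chicago", "No"),
]

# Count every (city, answer) combination -- one cell of the cross-tab each
crosstab = Counter(respondents)

print(crosstab[("Boston", "Yes")])   # 2
print(crosstab[("Chicago", "No")])   # 1
```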

5. Check for statistical significance.

If the statistical significance (p-value) for a data point is equal to or lower than 0.05, it has moderate statistical significance, since the probability of error is less than 5%. If the p-value is lower than 0.01, it has high statistical significance, because the probability of error is less than 1%.
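Those thresholds translate directly into code. A tiny helper (my own, purely illustrative) for labelling p-values:

```python
def significance_label(p_value):
    """Interpret a p-value using the common 0.01 / 0.05 thresholds."""
    if p_value < 0.01:
        return "high statistical significance"
    if p_value <= 0.05:
        return "moderate statistical significance"
    return "not statistically significant"

print(significance_label(0.003))  # high statistical significance
print(significance_label(0.04))   # moderate statistical significance
print(significance_label(0.2))    # not statistically significant
```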

6. Consider causation versus correlation.

Another important aspect of survey analysis is knowing whether the conclusions you're drawing are accurate. For instance, let's say we observed a correlation between ice cream sales and car thefts in Boston. Over a month, as ice cream sales increased so did reports of stolen cars. While this data may suggest a link between these variables, we know that there's probably no relationship.

Just because the two are correlated doesn't mean one causes the other. In cases like these, there's typically a third variable — the independent variable — that influences the two dependent variables. In this case, it's temperature. As the temperature increases, more people buy ice cream. Additionally, more people leave their homes and go out, which leads to more opportunities for crime.

While this is an extreme example, you never want to draw a conclusion that's inaccurate or insufficient. Analyze all the data before assuming what influences a customer to think, feel, or act a certain way.

7. Compare new data with past data.

While current data is good for keeping you updated, it should be compared to data you've collected in the past. If you know 33% of respondents said they would recommend your brand, is that better or worse than last year? How about last quarter?

If this is your first year analyzing data, make these results the benchmark for your next analysis. Compare future results to this record and track changes over quarters, months, years, or whatever interval you prefer. You can even track data for specific subgroups to see if their experiences improve with your initiatives.

Now that you've gathered and analyzed all of your data, the next step is to share it with coworkers, customers, and other stakeholders. However, presentation is key in helping others understand the insights you're trying to explain.

The next section will explain how to present your survey results and share important customer data with the rest of your organization.

1. Use a graph or chart.

Graphs and charts are visually appealing ways to share data. Not only are the colors and patterns easy on the eyes, but data is often easier to understand when shared through a visual medium. However, it's important to choose a graph that highlights your results in a relevant way.

how to present survey results: use a graph or chart

2. Minimal Formal Annual Report

This Canva report template lets the data speak for itself. The minimal portrait layout offers plenty of negative space around the content so that it can breathe. Bold numbers and percentages can remain or be omitted depending on the needs you have for each page. One of the rare gems of this template is its ability to balance large, clear images that don't crowd out the important written information on the page. Use this template for hybrid text-visual designs.

survey report template example from canva minimal formal annual report

4. Empowerment Keynote Presentation

This presentation template makes a great research report template due to its clean lines, contrasting graphic elements, and ample room for visuals. The headers in this template virtually jump off the page to grab the reader’s attention. There aren’t many ways to present quantitative data using this template, but it works well for qualitative survey reports like focus groups or product design studies where original images will be discussed.

survey report template example from canva empowerment keynote presentation

Don't forget to share this post!

Related articles.

Nonresponse Bias: What to Avoid When Creating Surveys

How to Make a Survey with a QR Code

50 Catchy Referral Slogans & How to Write Your Own

How Automated Phone Surveys Work [+Tips and Examples]

Online Panels: What They Are & How to Use Them Effectively

The Complete Guide to Survey Logic (+Expert Tips)

Focus Group vs. Survey: Which One Should You Use?

Leading Questions: What They Are & Why They Matter [+ Examples]

What are Survey Sample Sizes & How to Find Your Sample Size

28 Questionnaire Examples, Questions, & Templates to Survey Your Clients



Learn to Communicate with Data

How to Present Survey Results Using Infographics

How can you present survey data in a way that won’t bore your audience to tears?

Well, we all know that unique visuals like infographics can make charts and graphs more engaging. Survey data is easily translated into graphs and charts, making survey results and infographics the perfect marriage!

So without further ado, let’s get into everything you need to know to make a survey results infographic .

First up, let's kick things off by checking out some survey results templates that match up with different types of data. After that, I'll guide you through creating eye-catching survey results infographics, spicing up your results with some handy tips.


Click to jump ahead:

How to present survey results

  • 3 types of survey results infographics

5 best practices for presenting survey results in infographics

Visualizing survey data effectively means using different types of charts for different types of survey results (i.e. binary, rating scale, multiple choice, single choice, or demographic results).

1. Binary results

If your survey questions offer two binary options (for example, “yes” and “no”), a pie chart is the simplest go-to option.

Using pies for binary results is pretty self-explanatory. Basically, just use a single pie slice to highlight the proportion of “Yes” responses compared to “No” responses. For the “Yes” responses, use a brighter, more saturated color and start the segment at 12 o’clock on the pie chart:



If you want to compare the response rates of multiple groups, skip the pies and go for a single bar chart. A bunch of aligned bars are much easier to compare than multiple pie charts. Don’t forget to label each bar with its percentage for clarity:


For a fun alternative that’s less information-dense, you can split up the bars to make a sort of modified 100% stacked bar chart. This frees up some space to add better labels for both the “Yes” responses and the “No” responses.


Or, forget about the extra notes and let the data speak for itself. Use a standard 100% stacked bar chart, color-coded to contrast the different responses, and sorted for readability.


2. Rating scale results

In a rating scale question, survey takers are offered a spectrum of possible answers and are asked to select an answer along that spectrum.

This type of question is often found on customer satisfaction surveys, used to gain an understanding of customer sentiment about a product or service. It’s also popular for post-event surveys, to gauge how much people enjoyed the event.

Most commonly it comes in one of two forms: the Likert scale (“Strongly Disagree,” “Disagree,” “Neutral,” “Agree” and “Strongly Agree”) or the Net Promoter Score (NPS, ranging from 0 to 10). The NPS is used to judge the willingness of a customer to recommend a product or service to others.

The 100% stacked bar chart is the simplest option for visualizing survey data from rating scale questions. It’s quick to make, and presents the proportion of responses in each category quite clearly.

survey results

With either of these scales, it’s helpful to summarize the results into coarser categories. Take the five- and ten-point Likert and NPS scales and summarize them into simpler three-point scales (“disagree”, “neutral”, and “agree” or “positive”, “neutral”, and “negative”).

survey results

Presenting survey results in simplified categories goes a long way toward making the chart easier to read.
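Collapsing a 0–10 NPS scale into three coarser buckets is straightforward in code. A sketch with invented scores, using the standard 0–6 / 7–8 / 9–10 grouping:

```python
def nps_bucket(score):
    # Standard NPS grouping: 0-6 detractor, 7-8 passive, 9-10 promoter
    if score <= 6:
        return "negative"
    if score <= 8:
        return "neutral"
    return "positive"

scores = [10, 9, 8, 6, 3, 10, 7]  # hypothetical NPS responses
buckets = [nps_bucket(s) for s in scores]

# The NPS itself is the percentage of promoters minus the percentage of detractors
nps = 100 * (buckets.count("positive") - buckets.count("negative")) / len(scores)

print(buckets.count("positive"), buckets.count("neutral"), buckets.count("negative"))  # 3 2 2
print(round(nps, 1))  # 14.3
```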

3. Demographic results

If your survey gathers information about the respondents’ demographics in addition to other survey results, you may want to use that data as part of your analysis. Including factors like age, gender, income level, and even geographic location can make for an interesting infographic.

Visualizing survey data on a map is a fun way to include a demographic component in your infographic. A choropleth map, like the one you see below, can be used to show the distribution of data by geographic location. Different values are represented by different shades of a given color, so no reading is required:

survey results

Histograms, on the other hand, can be used to show the age distribution of a particular population. They can easily incorporate data on gender, too:

survey results

While these specialized survey charts are great for more complex data, they won’t always be necessary. Consider using an icon chart when you want to make a simpler type of demographic data, like job or role, a feature of your design. They’re a fun way to add more impact to simple results.


4. Open-ended comments

Open-ended questions (questions that require respondents to write out their own answer, rather than selecting a preset answer) present a bit of a challenge. In order to visualize them, the answers need to be grouped in some way, either through common keywords, sentiments or some other factor.

Word clouds, though frowned upon by some data visualization experts, can be a quick way to get a summary of this type of qualitative data.

They’re great for audiences who don’t have experience with data-heavy tables or statistical analysis , and they’re easy to make. Just pick out the most frequently-used keywords from the comments and plug them into our word cloud generator.
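Picking out the most frequent keywords can be done in a few lines before you reach for a word cloud generator. A sketch with hypothetical comments and a hand-picked stopword list:

```python
import re
from collections import Counter

# Hypothetical open-ended comments
comments = [
    "Love the easy setup, support was great",
    "Setup was easy but pricing is confusing",
    "Great support, great product",
]

# Minimal stopword list for illustration; real analyses use longer ones
STOPWORDS = {"the", "was", "is", "but", "and"}

words = [w for c in comments for w in re.findall(r"[a-z]+", c.lower())
         if w not in STOPWORDS]

print(Counter(words).most_common(3))  # 'great' tops the list with 3 mentions
```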


Otherwise you’ll have to do a more intensive manual qualitative analysis. Go through the open-ended responses and create categories.

Once you’ve quantified your answers, you’ll be able to present the results in a bar chart like this one, which shows the percent of comments that fall into each category.

survey results

5. Multiple choice results

Multiple choice questions allow respondents to select one or more answers from a list of possible answers.

The best visual for this kind of survey is a simple bar chart.

For the questions that allow respondents to make more than one selection, you’ll need to calculate the percentage of people who chose each answer, like you see in this chart from CoSchedule :

survey results
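For multi-select questions, each option’s percentage is computed against the number of respondents, so the percentages can sum to more than 100%. A small sketch with made-up data:

```python
# Hypothetical multi-select answers: each respondent can pick several options
responses = [
    ["Email", "Social"],
    ["Email"],
    ["Social", "Ads"],
    ["Email", "Ads"],
]

n = len(responses)
# Percent of respondents who chose each option (totals can exceed 100%)
pct = {opt: 100 * sum(opt in r for r in responses) / n
       for opt in ("Email", "Social", "Ads")}

print(pct["Email"])   # 75.0
print(pct["Social"])  # 50.0
```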

As always, bars should be sorted from greatest to least.

Pie charts are a decent option for times when respondents can only select a single answer. Keep in mind, though, that they’re not ideal if you’ve got a lot of data. If you have more than a few different responses to show, try giving each one its own chart:


3 Types of survey results infographics

Now that we’ve covered the best chart types for each type of survey result, let’s get into how we might combine survey charts to make a complete infographic.

A survey results infographic should use a combination of charts, graphic elements, and annotations to tell a story.

Single-column summary infographics

The most popular type of survey results infographic is the single-column summary infographic. It sums up all of the major takeaways of a survey, explicitly stating the most important insights.

It might show the results of every survey question simply, using a large, bold number or basic chart for each question:


Or it might present a comprehensive overview of the data, with a more detailed, annotated chart for each survey question:


It might add some extra commentary after each question, too.

Either way, it presents the questions sequentially, in a single column, so that viewers can scroll through to read the results like a story.

To make your own single-column summary infographic, simply start at the top with the first question, and work your way down until you’ve covered each of the major survey insights. State each question, add the results in the form of a chart, and add notes about any interesting learnings.


To add some visual organization to a single-column infographic, use different background colors to create distinctions between sections. Add colored blocks behind each question to divide up the content.

Like you can see in the Netflix survey above, alternating red and black background colors adds a pleasing sense of rhythm and makes the infographic easier to scan.

Letter-sized summary infographics

If your survey is only a few questions long, a big single-column infographic is probably overkill. It might be better to stick with a basic 8.5”x11” page, and make it all about the numbers.

Forget about adding lots of notes, comments, and annotations. Just state each question in the simplest possible terms (i.e., “Where users are located”), and use simple survey charts to sum up the results.


Make sure you organize the charts based on an underlying grid, or you might end up with a jumbled mess.

Or you can even forget about charts altogether, and present the key takeaways as simply as possible. Use big, bold numbers to make a statement:


Letter-sized feature infographics

The last go-to option for presenting survey results is the one-page feature infographic. It couldn’t be more simple. It breaks down the results of a single survey question, in a single chart, on a single page.

We like to call this the “power stat” infographic. It combines a very simple chart with some big, bold text for a high-impact result:


Even if you have the most interesting survey data ever, no one will give it a second look if your infographic is poorly designed. Keep these best practices in mind when you make your next survey results infographic.

Clearly label charts to provide context and prevent misinterpretation

Your readers should be able to understand your survey charts in only a few seconds’ glance. (Avoiding double-barrelled questions in the survey itself helps here.) And if you ask me, that makes chart labels the most important chart elements (after the data itself, of course).

Descriptive labels can be used to add context to the data: they spell out the conclusions and implications of the chart. This extra text will help ensure that nothing is misinterpreted or lost in translation between you and your audience.

A well-labelled chart looks something like this:

Romantic Partner Personality Survey

The labels stand out against the background of the chart, with arrows clearly tying them to their respective data points.

Simplify the data to create clarity

It can be tempting to include every single data point in a visualization, but that won’t do you any good!

Be selective with your data. Just because you have a lot of data doesn’t mean your audience will want to spend hours scrolling through a mile-long infographic.

Select the most important results, and leave the rest for more in-depth summaries like white papers or reports. Include some supporting data if you need to, but remember: data visualization is all about cutting through the clutter.

Don’t embellish your infographic with unnecessary decorations

Along the same lines, avoid adding unnecessary icons, hard-to-read fonts, gaudy colors, 3D effects, or any other form of “chartjunk”: ornamental elements that don’t help clarify anything about the data itself.

While you might think that adding extra elements will make your infographic more appealing, they often only distract from the information you want to communicate.


The focus of your infographic should be A) the charts and B) your notes, labels, and annotations.

Apply style choices uniformly throughout the infographic

Regardless of what colors, fonts, images, or icons you use, be sure to apply styling consistently throughout the graphic.

Notice how color is used consistently (to represent the same response) in each section of this infographic?


That makes comparing responses across populations painless.
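One way to enforce that consistency programmatically is to build the response-to-color mapping once and reuse it for every section of the graphic; the palette and response options below are hypothetical.

```python
# Hypothetical palette and response options.
PALETTE = ["#4C72B0", "#DD8452", "#55A868", "#C44E52"]
RESPONSES = ["Agree", "Neutral", "Disagree"]

# Build the mapping once, then reuse it for every chart section.
color_for = {resp: PALETTE[i] for i, resp in enumerate(RESPONSES)}

def colors_for_section(section_responses):
    """Look up the shared color for each response in one chart section."""
    return [color_for[r] for r in section_responses]

# Two population sections get identical colors for the same answer,
# regardless of the order the answers appear in each chart.
parents = colors_for_section(["Agree", "Disagree"])
non_parents = colors_for_section(["Disagree", "Agree"])
```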

Include links to data sources in the infographic footer

Cite your data sources, ideally in link form, in the footer of your infographic. Make it easy for the more curious members of your audience to find and peruse the original data for themselves.

Even if it’s your own original research, linking to the complete data will help your credibility and allow readers to make their own decisions about the data. And who knows--maybe they’ll find something interesting that you missed the first time around!

Sometimes tables and graphs alone just don’t cut it.

While an in-depth analysis of survey results is best presented in a comprehensive report, an infographic is an excellent medium for summarizing your findings for more immediate impact.

Now that you know how to present survey results with the right charts, the infographic design process should be painless. If you get stuck, check out this roundup of our most popular survey results templates .



5 Examples of How to Present Survey Results to Stakeholders


When you’ve lovingly designed, built, and distributed your survey and responses start flooding in, it’s time to begin the process of sorting and analyzing the data you’ll be presenting to stakeholders.

Once you’ve weeded out the unusable responses, begin recording relevant responses through your survey platform or in a spreadsheet. If you use survey software like CheckMarket, you can easily transfer data into visuals with pre-built reports and dashboards.

Decide on your data groups. Was the survey answering one overarching question, or did it cover multiple areas? Represent each data group separately.

For each result, provide additional information such as why you conducted the survey, what questions you were trying to answer, how the results help businesses, and any surprising answers.

When you have the data separated, the next step is to identify and prioritize the information your stakeholders will most want to see.

Choosing the Right Data to Share

First things first: who is your audience? Is it your boss? Your peers? Your direct clients or customers? The information that clients want to see, for instance, may be completely different from what your boss is interested in. The information you choose to share will vary drastically depending on the campaign you’re working on.

For example, if you’re working on a new marketing campaign, your audience may be interested in how you plan on advertising your business and what perks that may bring them.

However, when it comes to your stakeholders, they will be less interested in the customer perks, and more interested in how this new campaign will work for the business. They might want to know:

  • How is it going to grow your audience?
  • How will it turn them from leads to paying customers?
  • How can this help improve your business’s bottom line?

When you’re presenting results, clearly define the purpose of the survey and why it matters to your stakeholders. Your story should be specific and concise.

Raise vital questions early on and have the answers ready to go. Your stakeholders have a limited amount of time to listen to what you have to say – make sure you are making the most of it.

This means you’ll have to pick and choose your data results carefully. All results need to be relevant and essential. Your stakeholders will be interested in information that makes a difference. And you’ll want the answers to be presented in the easiest way possible – which is why you want to choose your display method carefully.


5 Ways to Display Your Survey Results

When you present results, aim to be clear, simple, and memorable. Viewers should not have to ask you to explain your results.

Here are five common ways to present your survey results to businesses, stakeholders, and customers.

1. Graphs and Charts

Graphs and charts summarize survey results in a quick, easy graphic for people to understand. Some of the most common types of graphs include:

  • Bar graphs are the most popular way to display results. They are easy to create, customize, and present, and most people know how to read a basic bar graph to interpret survey results.
  • Line graphs show how results change over time by tracking the ups and downs of the data.
  • Pie charts show the breakdown of a whole into sections. For example, your whole could be the total number of respondents, and the sections represent percentages that answered a certain way.
  • Venn diagrams show the interaction between respondents and their answers. For example, overlapping circles could show the differences and similarities in responses between parents who use a product versus non-parents who use a product.

When creating a chart or graph, make the findings clear to read. Avoid too many intersecting lines and text options. If you can’t fit all the information into one graph, create several graphs rather than making one complex chart. Using colors to differentiate groups is another way to make results easy to read.
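For a pie chart, the underlying arithmetic is simply converting counts into shares of the whole. A minimal sketch with hypothetical counts:

```python
# Hypothetical counts for a single yes/no/not-sure question.
counts = {"Yes": 180, "No": 90, "Not sure": 30}
total = sum(counts.values())  # 300 respondents

# Each pie slice is that answer's share of the total, as a percentage.
shares = {answer: round(100 * n / total, 1) for answer, n in counts.items()}
```

The resulting `shares` dictionary is exactly what gets fed to a pie-charting tool as slice sizes.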

2. Infographics

Infographics add a creative twist to otherwise bland charts and graphs. A good infographic will use images to enhance the message, not distract from the data.

One survey results presentation example is to use silhouettes of people to convey a percentage of the population instead of a bar graph. This image helps those who see it connect the statistic to real people.

A word cloud is a powerful way to display open-ended question responses graphically. As more people respond with a specific word, that word will appear in the cloud – emphasizing the most relevant answers.
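Under the hood, a word cloud starts from word frequencies. A standard-library-only sketch of that counting step, with hypothetical answers and a hypothetical stopword list:

```python
from collections import Counter

# Hypothetical open-ended answers to "What do you like about us?"
answers = [
    "Great support and fast shipping",
    "Support was great",
    "Fast shipping, great price",
]
STOPWORDS = {"and", "was", "the", "a"}

# Count every non-stopword; commas are stripped as a crude normalization.
words = Counter(
    word
    for answer in answers
    for word in answer.lower().replace(",", "").split()
    if word not in STOPWORDS
)

# The most frequent words get the largest type in the cloud.
top_words = words.most_common(2)
```

A real pipeline would normalize punctuation more thoroughly, but the frequency table is the core input to any word cloud tool.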


3. Video and Animations

People spend over 100 minutes a day watching videos – which is why marketers have tapped into this strategic area for reaching an audience. Nearly 88% of marketers say video marketing yields a strong return.

A video is a powerful tool for presenting information, including the results of your survey. You can capture your audience’s attention with motion, sound, and colorful statistics to help them remember information and react accordingly.

If you present findings through video, be aware that sharing options will be limited to platforms that can play video, such as blog posts, websites, and PowerPoint presentations. Also, creating a PDF of the findings for people to look over at their leisure is a helpful way to support a video presentation.

4. Spreadsheets

Spreadsheets like Excel are not visually appealing, but they work well for organizing large amounts of information to create a survey results report.

While an image or video works best on websites, sometimes you may need to add more information than can fit in one picture.

Suppose you wanted to provide stakeholders or business partners with a detailed look at the survey and all the responses. A spreadsheet will allow the freedom to display all the necessary information at once. You can still use attractive infographics to summarize the findings and a video to present the report along with the spreadsheet.
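A minimal sketch of exporting responses in spreadsheet-friendly CSV form, using Python's standard `csv` module; the field names and rows are hypothetical.

```python
import csv
import io

# Hypothetical survey responses.
rows = [
    {"respondent": 1, "role": "Manager", "satisfaction": 4},
    {"respondent": 2, "role": "Analyst", "satisfaction": 5},
]

# In-memory buffer for illustration; for a real export, swap in
# open("results.csv", "w", newline="").
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=["respondent", "role", "satisfaction"])
writer.writeheader()
writer.writerows(rows)

csv_text = buffer.getvalue()
```

The resulting file opens directly in Excel or Google Sheets, giving stakeholders the full response set alongside any summary graphics.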

5. Interactive Clickable Results

Interactive results are a fun way to allow viewers to explore results. You can also organize the findings to help break up large amounts of information.

Interactive maps are a common way to display survey results graphically. For example, results can be viewed by region when users click on a specific map area. Interactive maps and displays work best for websites and blogs.

An infographic that summarizes all the data as a global average allows people who don’t have the time to explore the map to see the information.

Customize Your Results in One Place

Time is precious in the marketing industry. You don’t want to spend days analyzing and sorting through survey results.

And you don’t have to.

By using CheckMarket, you can create, gather, and present survey results with one easy-to-use platform.




CWP: Craft of Prose: Researching the White Paper


Researching the White Paper

The process of researching and composing a white paper shares some similarities with the kind of research and writing one does for a high school or college research paper. What’s important for writers of white papers to grasp, however, is how much this genre differs from a research paper. First, the author of a white paper already recognizes that there is a problem to be solved, a decision to be made, and the job of the author is to provide readers with substantive information to help them make some kind of decision, which may include a decision to do more research because major gaps remain.

Thus, a white paper author would not “brainstorm” a topic. Instead, the white paper author would get busy figuring out how the problem is defined by those who are experiencing it as a problem. Typically that research begins in popular culture: social media, surveys, interviews, newspapers. Once the author has a handle on how the problem is being defined and experienced, its history and its impact, what people in the trenches believe might be the best or worst ways of addressing it, the author then will turn to academic scholarship as well as “grey” literature (more about that later).

Unlike a school research paper, the author does not set out to argue for or against a particular position, and then devote the majority of effort to finding sources to support the selected position. Instead, the author sets out in good faith to do as much fact-finding as possible, and thus research is likely to present multiple, conflicting, and overlapping perspectives. When people research out of a genuine desire to understand and solve a problem, they listen to every source that may offer helpful information. They will thus have to do much more analysis, synthesis, and sorting of that information, which will often not fall neatly into a “pro” or “con” camp: Solution A may, for example, solve one part of the problem but exacerbate another part of the problem. Solution C may sound like what everyone wants, but what if it’s built on a set of data that have been criticized by another reliable source? And so it goes.

For example, if you are trying to write a white paper on the opioid crisis, you may focus on the value of providing free, sterilized needles, which do indeed reduce disease and also provide an opportunity for the health care provider distributing them to offer addiction treatment to the user. However, the free needles are sometimes discarded on the ground, posing a danger to others; or they may be shared; or they may encourage more drug usage. All of those things can be true at once; a reader will want to know about all of these considerations in order to make an informed decision. That is the challenging job of the white paper author.

The research you do for your white paper will require that you identify a specific problem and seek popular culture sources to help define the problem, its history, and its significance and impact for people affected by it. You will then delve into academic and grey literature to learn how scholars and others with professional expertise answer these same questions. In this way, you will create a layered, complex portrait that provides readers with a substantive exploration useful for deliberating and decision-making. You will also likely need to find or create images, including tables, figures, illustrations or photographs, and you will document all of your sources.

  • Last Updated: Aug 26, 2024 1:21 PM
  • URL: https://guides.library.upenn.edu/c.php?g=1419866


Published on 29.8.2024 in Vol 13 (2024)

Optimizing Response Rates to Examine Health IT Maturity and Nurse Practitioner Care Environments in US Nursing Homes: Mixed Mode Survey Recruitment Protocol

Authors of this article:


  • Gregory L Alexander 1*, RN, PhD
  • Lusine Poghosyan 1*, RN, MPH, PhD
  • Yihong Zhao 1*, PhD
  • Mollie Hobensack 2*, RN, PhD
  • Sergey Kisselev 1*, MA
  • Allison A Norful 1*, BSN, MSN, PhD, ANP-BC
  • John McHugh 3*, MBA, PhD
  • Keely Wise 1*
  • M Brooke Schrimpf 1*, BA
  • Ann Kolanowski 4*, RN, PhD
  • Tamanna Bhatia 3*, BS, BA
  • Sabrina Tasnova 1*, BA

1 School of Nursing, Columbia University, New York, NY, United States

2 Icahn School of Medicine Mount Sinai, New York, NY, United States

3 School of Public Health, Columbia University Mailman, New York, NY, United States

4 Pennsylvania State University, University Park, PA, United States

*all authors contributed equally

Corresponding Author:

Gregory L Alexander, RN, PhD

School of Nursing

Columbia University

560 W. 168 Room 628

New York, NY, 10032

United States

Phone: 1 5733013131

Email: [email protected]

Background: Survey-driven research is a reliable method for large-scale data collection. Investigators incorporating mixed-mode survey designs report benefits for survey research including greater engagement, improved survey access, and higher response rates. Mixed-mode survey designs combine 2 or more modes for data collection including web, phone, face-to-face, and mail. Types of mixed-mode survey designs include simultaneous (ie, concurrent), sequential, delayed concurrent, and adaptive. This paper describes a research protocol using mixed-mode survey designs to explore health IT (HIT) maturity and care environments reported by administrators and nurse practitioners (NPs), respectively, in US nursing homes (NHs).

Objective: The aim of this study is to describe a research protocol using mixed-mode survey designs in research using 2 survey tools to explore HIT maturity and NP care environments in US NHs.

Methods: We are conducting a national survey of 1400 NH administrators and NPs. Two data sets (ie, Care Compare and IQVIA) were used to identify eligible facilities at random. The protocol incorporates 2 surveys to explore how HIT maturity (survey 1 collected by administrators) impacts care environments where NPs work (survey 2 collected by NPs). Higher HIT maturity collected by administrators indicates greater IT capabilities, use, and integration in resident care, clinical support, and administrative activities. The NP care environment survey measures relationships, independent practice, resource availability, and visibility. The research team conducted 3 iterative focus groups, including 14 clinicians (NP and NH experts) and recruiters from 2 national survey teams experienced with these populations, to achieve consensus on which mixed-mode designs to use. During focus groups we identified the pros and cons of using mixed-mode designs in these settings. We determined that 2 mixed-mode designs with regular follow-up calls (Delayed Concurrent Mode and Sequential Mode) are effective for recruiting NH administrators, while a concurrent mixed-mode design is best to recruit NPs.

Results: Participant recruitment for the project began in June 2023. As of April 22, 2024, a total of 98 HIT maturity surveys and 81 NP surveys have been returned. Recruitment of NH administrators and NPs is anticipated through July 2025. About 71% of the HIT maturity surveys have been submitted using the electronic link and 23% were submitted after a QR code was sent to the administrator. Approximately 95% of the NP surveys were returned with electronic survey links.

Conclusions: Pros of mixed-mode designs for NH research identified by the team were that delayed concurrent, concurrent, and sequential mixed-mode methods of delivering surveys to potential participants save on recruitment time compared to single mode delivery methods. One disadvantage of single-mode strategies is decreased versatility and adaptability to different organizational capabilities (eg, access to email and firewalls), which could reduce response rates.

International Registered Report Identifier (IRRID): DERR1-10.2196/56170

Introduction

Survey use in clinical informatics research is ubiquitous. Surveys are often used to collect data and measure phenomena such as knowledge of clinical informatics specialties [ 1 ] or the use of electronic health records [ 2 ]. Benefits of using surveys include lower costs to conduct research, better population descriptions, flexibility, and dependability of study designs [ 3 ]. Surveys are used in many professions and across health care settings, including nursing homes, home health care, and hospitals [ 4 - 6 ]. The expansive use of surveys in clinical informatics research calls for a continued focus on training to improve the ability of researchers to design high-quality surveys, develop effective reporting mechanisms, maximize recruitment strategies, and adapt to recruitment challenges needed to enhance the results. Various modes of survey data collection exist across studies. Literature establishing a theoretical foundation for questionnaire response styles used in surveys when collecting data about public opinion indicate that mode of data collection (eg, mixed-modes) is an important stimulus for response [ 7 ]. In this paper, researchers describe a research protocol using mixed-mode survey designs in clinical informatics research using 2 survey tools to explore Health IT (HIT) maturity and nurse practitioner (NP) care environments in US nursing homes (NHs).

In this protocol, HIT maturity is defined in 3 dimensions including HIT capabilities, use, and integration. These HIT maturity dimensions are conceived within NH resident care, clinical support (eg, HIT use in laboratory, pharmacy, and radiology activities), and administrative activities [ 8 ]. The HIT maturity survey tool contains 27 content areas and 183 content items [ 9 ]. The tool will be used to survey NH administrators. The Nurse Practitioner Nursing Home Organizational Climate Questionnaire (NP-NHOCQ), used to measure NP care environments, contains 5 subscales and 41 items. This tool will be used to survey NPs in NHs. The NP-NHOCQ measures the care environment of NPs in NHs in 5 areas: (1) NP-Physician Relations, (2) NP-Administration Relations, (3) NP-Director of Nursing Relations, (4) Independent Practice and Support, and (5) Professional Visibility.

Mixed-Mode Survey Research

Survey-driven research is known as a reliable data collection method to capture individual perspectives on a large scale. However, there are many challenges related to survey-based data collection, such as low response rates and rising costs of human capital [ 10 ]. Previously, researchers have explored the use of mixed-mode survey designs combining methods such as web, phone, face-to-face, and mail administrations. Mixed-mode survey research involves using 2 or more of these modes for data collection [ 11 ]. A survey mode is defined as the communication channel used to collect survey data from one or more respondents [ 11 ]. Prior research has reported the benefits of mixed-mode surveys such as enhancing engagement [ 12 ], mitigating accessibility barriers [ 13 ], and increasing response rates [ 14 ].

Survey modes can be implemented individually or combined with other modes. A single mode approach deploys only one mode at a time. For example, a researcher may use postal mail services as the only method to contact study participants and collect data. Alternatively, mixed-mode designs use multiple modes to recruit respondents (see Figure 1 ). For instance, a simultaneous (also known as concurrent) mixed-mode approach allows respondents to choose their preference between multiple modes deployed at the same time. For instance, a researcher may offer study participants a choice to complete a survey using an electronic PDF version of the questionnaire that can be printed, scanned, and faxed back to researchers or an electronic survey link completed via web. Mixed-modes can also use a sequential approach. In this mode, researchers may offer 2 different modes, one mode at a time, with a second mode coming later, after the first. This mode is particularly useful when following up with participants who do not respond (nonrespondents) to provide alternative survey strategies that better suit their workflows. An example of sequential mode may include contacting participants initially via phone call and then, following no initial response, a second contact is made using a QR code that is sent via a mailed letter. Another mixed-mode useful for following up with nonrespondents is called a delayed concurrent mode. In this mode, participants are offered one mode, then nonrespondents are offered a choice between 2 other modes later during follow-up activities. An example of the delayed concurrent mode might include an initial mailed survey. Then when no response is received, potential participants are sent a choice between a face-to-face or a phone interview to complete the survey. Finally, an adaptive mixed-mode design incorporates different sampling units. In the adaptive modes, 2 different samples are each offered a different mode.


Mixed-mode survey research has long been identified as a means to improve participation in survey recruitment. For instance, a systematic review of 22 articles among nurses provided evidence that recruitment design strategies that include postal and telephone contacts are generally more successful than fax or web-based approaches [ 16 ]. In a more recent systematic review of 893 studies, mode of administration was a key factor in successful recruitment. However, in this review, electronic and postal modes of survey data collection were less likely to result in higher response rates [ 17 ]. In other research using mixed-modes with clinicians, using a multiple contact protocol generated final response rates 10 percentage points higher than single mode methods [ 18 ].

In this paper, we present mixed-mode methods used in a large survey of NHs in the United States. To achieve research goals, we must have a robust and effective recruitment plan. Therefore, we are using an innovative research protocol using mixed-modes to improve NH administrators’ and NPs’ engagement in survey data collection while increasing the response rates.

Mixed-Mode Survey Research in Nursing Homes

The US health care system has over 15,600 NHs serving over 1.3 million residents [ 19 ]. A growing strategy for improving the outcomes for NH residents is to effectively integrate HIT into care delivery to promote safer care environments for NH residents. HIT integration into NH resident care may improve care environments and by extension, better care quality [ 20 ]. Survey-driven research is a reliable method to capture the perspective of individuals about these phenomena on a large scale. Our team is conducting a national survey of NH administrators and NPs, incorporating 2 different survey tools to explore how HIT maturity (survey 1) impacts care environments (survey 2) where NPs work. A specific aim of this research is to provide comprehensive assessments of HIT maturity and NP care environments in NHs nationally. The goal of the National Institute of Aging funded research study (5R01AG080517, principal investigators: GLA and LP) is to assess differences in HIT maturity and care environments in NHs where NPs deliver care to residents with Alzheimer disease and related dementias and examine their impact on hospitalizations and emergency department visits among residents.

The sample for this study includes randomly selected NHs including administrators (ie, NH leaders responsible for HIT systems in their organization) and NPs from each NH. Our goal is to recruit participants from 1400 NHs in the United States. We use 2 national sources to identify NHs for this study. The first data source is called NH Compare (or Care Compare), a publicly available national data set containing information about organizational characteristics of US NHs and quality of care [ 21 ]. The second data source stems from IQVIA, a company that stores national data about NH location, contact information, and staff including administrators and NPs. In preparation for this proposal, IQVIA provided our team data to identify all US NHs with practicing NPs. According to these data, in 2021, a total of 11,222 unique NPs worked in 5000 NHs for an average of 2.2 NPs/NH. Based on this estimate, we expect to contact 3080 NPs within the 1400 NHs (1400 NHs × 2.2 NPs/NH).
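The expected-contact arithmetic above (an average of 2.2 NPs per NH across 1400 facilities) can be restated and checked directly; this is simply the paper's own estimate in code form.

```python
# The protocol's own estimate: 2.2 NPs per nursing home, 1400 facilities.
nhs_to_recruit = 1400
avg_nps_per_nh = 2.2

# Round because the average is fractional per facility.
expected_nps = round(nhs_to_recruit * avg_nps_per_nh)  # 3080 NPs expected
```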

Inclusion and Exclusion Criteria

We use NH Compare files to identify NHs for our study based on 2 specific inclusion criteria. First, we include all NHs located in the United States including Alaska and Hawaii. Second, we include only NHs with at least 1 NP working in the facility. NPs may be actual employees of a facility or may be employed by an external organization as a consultant for a facility and not directly by the NH. Facilities are not eligible to participate if they meet any of the following 3 exclusion criteria. First, NHs that do not have an NP employed. Second, NHs with a hospital-based designation, as their HIT maturity is likely to be different due to national incentives for HIT adoption in acute care [ 22 , 23 ]. Approximately 6% (n=15,518) of NHs have a health system designation that includes common ownership or joint management [ 24 ]. Third, NHs designated as a special focus facility (SFF), which indicates any NH with a history of serious quality issues. NHs with an SFF designation are required to be in a program to stimulate quality-of-care improvements [ 25 ]. In October 2023, Centers for Medicare & Medicaid Services indicated that approximately 0.5% of US NHs have an SFF designation [ 25 ].

Data from the NH Compare website were downloaded in February 2023 to identify facilities for recruitment. We identified 4163 facilities that matched our criteria. In preliminary work, during 2 prior NH survey studies, we achieved approximately a 45% response rate of surveys returned from administrators. Therefore, for the current protocol, we oversampled by randomly selecting 3000 NHs, which we identified by linking the NH Compare and IQVIA data. We included at least 5 facilities in each state, except for Alaska (2 facilities) and Wyoming (3 facilities), which have few NHs with NPs identified. We will recruit all administrators from these 3000 NHs to complete a HIT maturity survey. For every NH that completes the HIT maturity survey, we will recruit all NPs from those facilities.
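The oversampling step described above (random selection with a minimum number of facilities per state) could be sketched roughly as follows; the facility data and function are illustrative, not the study's actual code.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical facility list: many NY facilities, few in AK and WY.
facilities = [{"id": i, "state": state}
              for i, state in enumerate(["NY"] * 50 + ["AK"] * 2 + ["WY"] * 3)]

def sample_with_state_minimum(facilities, target, min_per_state=5):
    """Randomly select `target` facilities, taking up to `min_per_state`
    from every state first (small states contribute all they have)."""
    by_state = {}
    for f in facilities:
        by_state.setdefault(f["state"], []).append(f)
    chosen = []
    for group in by_state.values():
        chosen.extend(random.sample(group, min(min_per_state, len(group))))
    # Fill the remainder at random from facilities not yet chosen.
    chosen_ids = {f["id"] for f in chosen}
    remaining = [f for f in facilities if f["id"] not in chosen_ids]
    chosen.extend(random.sample(remaining, target - len(chosen)))
    return chosen

sample = sample_with_state_minimum(facilities, target=20)
```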

Sample Characteristics

After we generated the random sample from the merged files, we compared basic characteristics of NHs between the selected NHs and the rest of the NHs nationally. The following NH characteristics were compared to assure that there was limited bias in sample representation:

  • Bed size (<60 beds, 60-120 beds, and >120 beds)
  • Ownership (for profit vs nonprofit)
  • Location (metropolitan, micropolitan, small town, and rural)
  • Staffing hour
  • Medicare vs Medicaid
  • NH overall rating: (ranging from 1 to 5)
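A simple way to run such a comparison is to compute the distribution of each characteristic in the sample and nationally and look for large gaps. A sketch for bed size, with hypothetical data (the study's actual comparison method is not shown here):

```python
from collections import Counter

# Hypothetical bed-size categories for sampled vs remaining NHs.
sampled = ["<60", "60-120", ">120", "60-120", "60-120"]
national = ["<60", "<60", "60-120", ">120", "60-120", "60-120", ">120", "<60"]

def proportions(values):
    """Share of each category, rounded for display."""
    counts = Counter(values)
    n = len(values)
    return {category: round(count / n, 2) for category, count in counts.items()}

sample_props = proportions(sampled)
national_props = proportions(national)
# Large gaps between the two distributions would suggest a biased sample.
```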

Focus Groups to Assess the Pros and Cons of Mixed-Mode Designs

The research team conducted iterative focus groups that included NPs and survey recruitment experts to discuss the pros and cons of different recruitment strategies. To explore the pros and cons, members of the focus groups assessed recruitment strategies used during 2 prior national studies of long-term care NH sites [ 26 ]. The PI and some members of the focus groups led the national studies that were reviewed. Additionally, members of the focus groups reviewed and discussed potential mixed-mode strategies from the literature to incorporate into this protocol. Schouten’s [ 15 ] work on mixed-mode survey research helped inform our protocol design.

Data Collection

We aim to survey administrators and NPs using 2 survey tools describing HIT maturity and care environments from each discipline, respectively. To prepare the protocol, the research team conducted 3 iterative focus groups with clinicians (NPs and NH experts), recruiters from 2 national survey teams experienced with recruitment in NHs and with NPs, and a statistician to achieve consensus on which mixed-mode designs to incorporate into this research. Our research protocol workflow is illustrated in Figure 2 . The following sections include descriptions of the mixed-mode workflows by discipline and the surveys being used in this protocol.


Survey 1: NH Administrator and HIT Maturity

For each randomly selected NH, contact information for NH administrators has been obtained using the IQVIA data set. Our team searched NH websites to confirm contact information of current administrators. During initial contact with each NH administrator (either by phone or a mailed letter), we describe the study’s purpose and explain the study. All administrators who are contacted and agree to participate in the study will be sent a cover letter providing details about the study’s purpose, instructions on how to complete the NH HIT maturity survey tool, and descriptions of the benefits and risks of participation. We provide administrators with a description of the HIT maturity survey, including that it measures HIT capabilities, extent of HIT use, and degree of HIT integration in resident care, clinical support, and administrative activities [ 9 ]. We incorporate 2 mixed-mode designs when recruiting NH administrators: a Delayed Concurrent Mode and a Sequential Mode with regular follow-up phone contacts to stimulate engagement.

Delayed Concurrent Mode

Our primary mode for this study is a Delayed Concurrent mixed-mode design. In this mode, administrators are offered the choice between multiple modes. During the first contact (conducted by phone), we describe the project and obtain email addresses for administrators who agree to participate. Then, we follow up with administrators by email with an electronic survey link and a PDF simultaneously. This is important because the choice between an electronic survey and a PDF gives administrators the flexibility to pick the mode that fits their needs. In nonresponse cases, administrators are later offered a different mode: a postal letter with a QR code containing a URL link to the survey tool.

Sequential Mode

As a secondary option, we incorporate a Sequential Mode for a minimum of 10% of the facilities in each state. In this mode, participants are offered only 1 mode at a time, and only a subset of nonrespondents is invited to the second mode. The first mode is a mailed postal letter that describes the study and provides both a QR code and a URL link to the survey for the NH administrator. Recruiters make a series of follow-up calls after the letter is sent, during which the recruitment team confirms email addresses. Administrators who agree to take the survey and have provided their email addresses, but have not returned a completed mailed or faxed survey after a minimum of 4 follow-up calls, are offered the second mode: a URL link and a PDF of the survey sent via email.

Survey 2: NPs and Care Environment

The recruitment team asks administrators to confirm that at least 1 NP works in their facility (whether employed by the NH or by an external health organization that provides NP services to the facility) and to verify the NP's name and contact information. NPs' contact information listed in the IQVIA data is confirmed with administrators to ensure that it is current; outdated information is updated by the recruitment team in the recruitment database. NHs that do not meet the eligibility criterion (eg, the NP left and no new NP was hired) are excluded. The research team will use a concurrent mixed-mode design to recruit NPs for the study.

Concurrent Mode

NPs are contacted by email or phone by our recruitment team and are provided with information describing the study, its voluntary nature, and confidentiality per the institutional review board’s (IRB’s) protocol. NPs are sent links to both an electronic survey and PDF concurrently. We expect some NHs to have more than one NP complete a survey.

Ethical Considerations

The research protocol and all procedures were approved by the Columbia University Institutional Review Board (IRB; AAAU3845). Ethical issues addressed in the IRB protocol included confidentiality and anonymity to encourage honest responses, and restriction of data access to authorized research staff only. Researchers also created plans for minimizing coercive behaviors during recruitment (eg, applying pressure) by establishing systematic follow-up schedules and templates with recruitment language to use during contacts.

Follow-Up and Engagement

Up to 4 follow-up phone calls are conducted at specified 2-week intervals for administrators who have agreed to participate. Administrators and NPs who do not complete surveys are marked as “No Contact.” Administrators and NPs who complete a survey receive US $25 compensation in the form of a gift card.

Survey Coding and Cleaning

All survey data collection is conducted through REDCap (Research Electronic Data Capture; Vanderbilt University), a web-based application designed for data collection and management in research studies with an emphasis on data security and flexibility [ 27 ]. We maintain data about recruitment efforts in REDCap, including the number of facilities contacted, persons contacted at each facility, packets or links sent, surveys received, initial cannot reach, contact calls made, follow-up calls made, confirmations received (will complete and not completed), stated completions, and follow-up cannot reach. Recruitment staff, including a project coordinator and 4 research assistants, make recruitment calls and send surveys to NH administrators and NPs.

Data collected via electronic survey are transferred electronically to the REDCap database. Data collected via PDF are manually entered into the REDCap system by our research staff. A meticulous data-cleaning strategy is used before formal statistical analysis to ensure data quality [ 28 ]. We use algorithms to check questionnaires for consistency and validity; for example, graphical exploration through boxplots, histograms, and scatter plots helps detect outliers and logically implausible data points. Any identified outlying observations undergo thorough examination to distinguish potential data entry errors from genuinely extreme values. Data entry errors are corrected, and any systematic patterns are scrutinized. Every step of the data cleaning process and the associated decisions are documented to ensure transparency.
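As an illustration, the outlier-screening step described above could be sketched as follows. This is a minimal sketch, not the study's actual cleaning scripts; the data frame, column names, and the interquartile range (IQR) threshold are all hypothetical:

```python
import pandas as pd

def flag_outliers_iqr(series: pd.Series, k: float = 1.5) -> pd.Series:
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR] for manual review."""
    q1, q3 = series.quantile([0.25, 0.75])
    iqr = q3 - q1
    return (series < q1 - k * iqr) | (series > q3 + k * iqr)

# Hypothetical survey data: total HIT maturity scores keyed by facility ID.
df = pd.DataFrame({"facility_id": [1, 2, 3, 4, 5],
                   "hit_total": [42, 38, 41, 40, 350]})  # 350 is implausible
df["review"] = flag_outliers_iqr(df["hit_total"])
print(df[df["review"]])  # rows routed to manual examination
```

Flagged rows are not deleted automatically; as the protocol states, each is examined to decide whether it is a data entry error or a genuinely extreme value.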

There is a possibility that some NH administrators or NPs who agree to participate in the study will not fill out an HIT maturity or care environment survey tool completely. We anticipate that there may be some missing data on completed surveys. Based on prior national HIT maturity and NP studies, we have estimated that less than 3% of the data for surveys received was missing for both types of surveys. We plan to use all available data in our analyses.

Survey Measures

NH HIT Maturity [ 8 , 29 ] is measured using a total composite score that corresponds to 7 HIT maturity stages. The stages range from the lowest, Stage 0 (nonexistent HIT solutions or electronic health records), to Stage 6 (use of data by residents and resident representatives to generate clinical data and drive self-management). A higher total HIT maturity score indicates greater IT capabilities, use, and integration in resident care, clinical support (including IT systems in pharmacy, radiology, and laboratory), and administrative activities in the NH. The overall standardized Cronbach α for this instrument in past research was 0.86 (high); each dimension or domain achieved a Cronbach α ranging from 0.7 to 0.9 [ 30 ].

NP Care Environment is measured by the 44-item Nurse Practitioner Nursing Home Organizational Climate Survey (NP-NHOCS) [ 31 ], which asks NPs to rate work attributes in NHs on a 5-point Likert scale. The NP-NHOCS has 5 subscales: (1) NP-Physician Relations (7 items) measures the relationship, communication, and teamwork between NPs and physicians; (2) NP-Administration Relations (11 items) measures collaboration and communication between NPs and managers; (3) NP-Director of Nursing Relations (8 items) measures the relationship, communication, and teamwork between NPs and Directors of Nursing; (4) Independent Practice and Support (9 items) measures the resources and support NPs have for their independent practice; and (5) Professional Visibility (9 items) measures how visible the NP role is in the organization. We first compute NP-level scores and then NH-level mean scores by aggregating the responses of all NPs in each NH, as recommended [ 32 ]. Higher mean scores indicate better care environments. NPs are also asked to complete demographic measures (eg, age, sex, and experience).
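The two-step aggregation described above (NP-level subscale means, then NH-level means) could be sketched as follows. The response data, subscale name, and IDs are illustrative only, not actual NP-NHOCS items or study data:

```python
import pandas as pd

# Hypothetical long-format responses: one row per NP per item (1-5 Likert).
responses = pd.DataFrame({
    "nh_id":    [1, 1, 1, 1, 2, 2],
    "np_id":    ["a", "a", "b", "b", "c", "c"],
    "subscale": ["np_physician_relations"] * 6,  # illustrative subscale name
    "rating":   [4, 5, 3, 3, 2, 4],
})

# Step 1: NP-level subscale score = mean of that NP's item ratings.
np_scores = (responses.groupby(["nh_id", "np_id", "subscale"])["rating"]
             .mean().reset_index(name="np_mean"))

# Step 2: NH-level score = mean of the NP-level scores within the facility.
nh_scores = (np_scores.groupby(["nh_id", "subscale"])["np_mean"]
             .mean().reset_index(name="nh_mean"))
print(nh_scores)
```

Averaging at the NP level first keeps facilities with many responding NPs from dominating the NH-level score through sheer item count.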

A number of planned analyses will be performed. For the HIT maturity survey, we aim to determine which survey mode (Delayed Concurrent vs Sequential) maximizes NHs' engagement in our research project and which factors influence the survey completion method. First, descriptive statistics will summarize the key variables of interest, including but not limited to response rates (agreeing to participate or not), completion rates, time taken to complete the survey, and the proportion of electronic surveys received. Chi-square or Fisher exact tests will be used to examine differences in response rates, completion rates, and the proportion of electronic surveys received between NHs assigned to the Delayed Concurrent Mode and Sequential Mode designs; this analysis will determine whether one survey mode yields higher response and completion rates than the other. Second, if sufficient data are available, linear regression models will test whether NH administrators' demographic characteristics (ie, age, sex, race or ethnicity), NH-level characteristics (eg, bed size and staffing hours), and HIT maturity level are associated with the choice of survey completion method (electronic or PDF format).
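As a sketch of the planned mode comparison, a chi-square test (with Fisher exact as the small-sample fallback) on a 2×2 table of completion counts might look like this; the counts are hypothetical, not study results:

```python
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: rows = assigned survey mode,
# columns = [completed, not completed].
table = [[60, 40],   # Delayed Concurrent Mode
         [45, 55]]   # Sequential Mode

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.3f}, dof={dof}")

# Fisher exact test is preferred when expected cell counts are small.
odds_ratio, p_fisher = fisher_exact(table)
```

A small p value would indicate that completion rates differ between the two recruitment modes.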

For the NP care environment survey, all NPs will be offered both an electronic survey and a PDF concurrently. The proportion of electronic surveys received among respondents will be calculated to determine the preference for electronic over PDF surveys. If a sufficient number of electronic and PDF surveys are received, linear mixed effects models with NH as a random effect will be used to assess whether the choice of survey completion method is associated with NH-level characteristics (eg, HIT maturity score, geographical location, and ownership), NP-level characteristics (eg, age, race or ethnicity, years of experience, and job roles), and NP care environment scores.
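A minimal sketch of such a mixed effects model, using statsmodels with a random intercept for NH, is shown below on synthetic data. All variable names and values are hypothetical, and the binary outcome is modeled with a linear mixed model here only for simplicity (a logistic mixed model is a natural alternative):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_nh, n_np = 30, 4  # hypothetical: 30 NHs with 4 responding NPs each

df = pd.DataFrame({
    "nh_id": np.repeat(np.arange(n_nh), n_np),
    "hit_maturity": np.repeat(rng.normal(50, 10, n_nh), n_np),  # NH-level
    "years_exp": rng.integers(1, 30, n_nh * n_np),              # NP-level
})
# Outcome: 1 = NP chose the electronic survey over the PDF.
df["electronic"] = (0.01 * df["hit_maturity"]
                    + rng.normal(0, 0.2, len(df)) > 0.5).astype(float)

# Random intercept for NH accounts for clustering of NPs within facilities.
model = smf.mixedlm("electronic ~ hit_maturity + years_exp",
                    data=df, groups=df["nh_id"]).fit()
print(model.params)
```

The random intercept captures facility-level variation in survey-mode preference that the fixed predictors do not explain.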

The research team conducted 3 iterative focus groups with a total of 14 clinicians including NPs and survey recruitment experts. The following pros and cons were used to determine our recruitment strategies.

Pros of Mixed-Mode Designs

The pros of mixed-mode designs identified by the team during focus groups were that delayed concurrent, concurrent, and sequential mixed-mode approaches can save recruitment time compared with single-mode delivery methods, and that they minimize effort on the part of recruitment staff. With mixed survey modes, participants can immediately choose their preferred survey method, potentially enhancing their satisfaction with the survey process; this facilitates engagement, which leads to completed surveys and increased response rates. Another pro of the concurrent mode was that sending a QR code by postal mail in addition to providing a URL link gives respondents greater choice and flexibility, which could enhance engagement and responsiveness to surveys. A pro of single-mode designs, by contrast, is the potential for quick turnaround times and representative samples for projects with limited resources [ 33 ].

Cons of Mixed-Mode Designs

One disadvantage of single-mode strategies is that they reduce versatility and adaptability to different organizational capabilities (eg, access to email and system firewalls), which could lower response rates. For example, a URL link sent via email might be difficult for NH administrators and NPs to open because of firewalls that organizations put in place to meet the higher security standards of HIT systems. We identified a related con of the sequential mode: if only a single mode is offered when recruitment starts and that mode is perceived as a barrier to participation (eg, by respondents concerned about email access), the respondent may not engage with further calls or the second wave. Other cons related to NH infrastructure and environmental variables; for instance, NPs might have limited access to a printer or no workspace available to print and complete a PDF survey. Other reported cons of mixed-mode designs (eg, sequential web-then-telephone modes) compared with a single telephone-only mode include higher missing data rates and more focal responses [ 34 ].

After randomization, we rigorously compared selected and nonselected NHs on key NH-level characteristics such as bed size, ownership, location, staffing hours, payer mix, and overall rating. Our analysis did not reveal statistically significant differences in these characteristics (see Table S1 in Multimedia Appendix 1 ).

The research study was funded in February 2023. Participant recruitment for the project began in June 2023. As of June 3, 2024, a total of 109 HIT maturity surveys and 83 NP surveys have been returned. About 69% of the HIT maturity surveys have been submitted using the electronic link and 27% were submitted after a QR code was sent to the administrator. About 95% of the NP surveys were returned with electronic survey links.

Our national study is the first to our knowledge to focus on NH HIT maturity and the NP care environments where administrators and NPs work. Although NPs are a predominant provider in NHs [ 35 ], no study to date has focused on NP care environments and the resources (eg, technology) available to this discipline, leading to limited understanding of how NPs conduct their work and how HIT maturity contributes to an NP's ability to improve care and outcomes for NH residents with serious chronic conditions. Furthermore, a primary objective of this study is to provide evidence of how administrators and NPs codesign technologies that can transform care delivery in NHs. Our team anticipates that using mixed modes will enhance our ability to work with participants at different stages of HIT maturity, which we believe is an important factor in how care environments are perceived by employees (eg, NPs) in these settings.

To achieve this goal, we first must be able to maximize engagement in this survey research with strong representation by both NH administrators and NPs from all US states. Second, we must mitigate barriers to NH administrators and NPs accessing surveys so that they can participate. Finally, we must achieve acceptable response rates by generating different modes of support, providing choice and flexible means for NH administrators and NPs to participate in the survey process. In this protocol, we have identified mixed-mode recruitment strategies based on the expert opinion of experienced survey recruitment staff that should enable us to meet our goals and to achieve a representative national sample of NH administrators and NPs.

Limitations

This study may have limitations. In prior work, we identified great variability in HIT capabilities among NHs, such as access to external email and connectivity challenges where NH staff work [ 36 ]. Depending on the survey mode used during data collection, this variation may create differences in response rates between facilities. We have incorporated various mixed-mode methods in this research protocol so that respondents can choose their preferred method and complete a survey in a way that suits their institutional characteristics. The use of mixed modes has been shown to improve participation in survey research, thus reducing barriers for less well-resourced NHs (eg, NHs with lower HIT maturity levels). Less-resourced NHs are typically those with greater resident ethnic and racial diversity [ 37 ], so improving their participation is critical to enhancing the representation of these communities, which is a benefit of the design.

Conclusions

This research protocol describes a study using 2 survey tools to measure HIT maturity and NP care environments in US NHs as perceived by administrators and NPs. We have identified the pros and cons of survey recruitment strategies experienced by our team in past work and reviewed evidence-based recruitment strategies using mixed modes, which are defined in the literature as methods that use 2 or more modes to recruit respondents. In this protocol, we incorporate a delayed concurrent mode, a sequential mode, and a concurrent mode to enhance engagement, mitigate barriers to survey access, and increase response rates in collecting survey data from both NH administrators and NPs, yielding robust data for future analysis.

Acknowledgments

The authors wish to acknowledge Dr Richard Chan and Ms Hana Amer for their contributions in the initial stages of determining the steps in the research protocol. Research reported in this publication was supported by the National Institute on Aging of the National Institutes of Health (award R01AG080517). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Authors' Contributions

GLA, LP, YZ, MH, SK, AAN, KW, MBS, AK, TB, and ST contributed to the design, acquisition, interpretation, writing, and revision of this manuscript.

Conflicts of Interest

None declared.

Multimedia Appendix 1: Supplementary table.

  • Silverman HD, Steen EB, Carpenito JN, Ondrula CJ, Williamson JJ, Fridsma DB. Domains, tasks, and knowledge for clinical informatics subspecialty practice: results of a practice analysis. J Am Med Inform Assoc. 2019;26(7):586-593. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • De Groot K, De Veer AJE, Paans W, Francke AL. Use of electronic health records and standardized terminologies: a nationwide survey of nursing staff experiences. Int J Nurs Stud. 2020;104:103523. [ CrossRef ] [ Medline ]
  • Jones TL, Baxter MAJ, Khanduja VA. A quick guide to survey research. Ann R Coll Surg Engl. 2013;95(1):5-7. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Abramson EL, McGinnis S, Moore J, Kaushal R, HITEC investigators. A statewide assessment of electronic health record adoption and health information exchange among nursing homes. Health Serv Res. 2014;49(1 Pt 2):361-372. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Jones CD, Jones J, Bowles KH, Flynn L, Masoudi FA, Coleman EA, et al. Quality of hospital communication and patient preparation for home health care: results from a statewide survey of home health care nurses and staff. J Am Med Dir Assoc. 2019;20(4):487-491. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Giai J, Boussat B, Occelli P, Gandon G, Seigneurin A, Michel P, et al. Hospital survey on patient safety culture (HSOPS): variability of scoring strategies. Int J Qual Health Care. 2017;29(5):685-692. [ CrossRef ] [ Medline ]
  • Van Vaerenbergh Y, Thomas TD. Response styles in survey research: a literature review of antecedents, consequences, and remedies. International Journal of Public Opinion Research. 2012;25(2):195-217. [ CrossRef ]
  • Alexander GL, Powell K, Deroche CB, Popejoy LL, Mosa ASM, Koopman R, et al. Building consensus toward a national nursing home information technology maturity model. J Am Med Inform Assoc. 2019;26(6):495-505. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Alexander GL, Deroche CB, Powell KR, Mosa ASM, Popejoy L, Koopman RJ, et al. Development and pilot analysis of the nursing home health information technology maturity survey and staging model. Res Gerontol Nurs. 2022;15(2):93-99. [ CrossRef ] [ Medline ]
  • Krosnick JA, Presser S, Husbands-Fealing K, Ruggles S. The future of survey research: challenges and opportunities. Arlington VA: The National Science Foundation. 2015:1-163.
  • Schouten B, Brakel JVD, Buelens B, Giesen D, Luiten A, Meertens V. Designing Mixed-Mode Surveys. Mixed-Mode Official Surveys: Design and Analysis. Boca Raton FL. CRC Press; 2022.
  • Sammut R, Griscti O, Norman IJ. Strategies to improve response rates to web surveys: a literature review. Int J Nurs Stud. 2021;123:104058. [ CrossRef ] [ Medline ]
  • Sastry N, McGonagle KA. Switching from telephone to web-first mixed-mode data collection: results from the transition into adulthood supplement to the US panel study of income dynamics. J R Stat Soc Ser A Stat Soc. 2022;185(3):933-954. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Patrick ME, Couper MP, Jang BJ, Laetz V, Schulenberg JE, O'Malley PM, et al. Building on a sequential mixed-mode research design in the monitoring the future study. J Surv Stat Methodol. 2022;10(1):149-160. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Schouten B, van den Brakel J, Buelens B, Giesen D, Luiten A, Meertens V. Foreword. In: Mixed-Mode Official Surveys: Design and Analysis. Boca Raton FL. CRC Press; 2022:3-8.
  • VanGeest J, Johnson TP. Surveying nurses: identifying strategies to improve participation. Eval Health Prof. 2011;34(4):487-511. [ CrossRef ] [ Medline ]
  • Ellis LA, Pomare C, Churruca K, Carrigan A, Meulenbroeks I, Saba M, et al. Predictors of response rates of safety culture questionnaires in healthcare: a systematic review and analysis. BMJ Open. 2022;12(9):e065320. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Beebe TJ, Jacobson RM, Jenkins SM, Lackore KA, Rutten LJF. Testing the impact of mixed-mode designs (Mail and Web) and multiple contact attempts within mode (Mail or Web) on clinician survey response. Health Serv Res. 2018;53 Suppl 1(Suppl Suppl 1):3070-3083. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • CDC. National Center for Health Statistics Nursing Home Care. Atlanta Georgia. Centers for Disease Control and Prevention/National Center for Health Statistics; 2017.
  • White E, Woodford E, Britton J, Newberry LW, Pabico C. Nursing practice environment and care quality in nursing homes. Nurs Manage. 2020;51(6):9-12. [ CrossRef ] [ Medline ]
  • CMS. Centers for medicare and medicaid nursing home compare. Centers for Medicare and Medicaid Sept. 2019. [ CrossRef ]
  • Adler-Milstein J, Raphael K, O'Malley TA, Cross DA. Information sharing practices between US hospitals and skilled nursing facilities to support care transitions. JAMA Netw Open. 2021;4(1):e2033980. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Burke RE, Phelan J, Cross D, Werner RM, Adler-Milstein J. Integration activities between hospitals and skilled nursing facilities: a national survey. J Am Med Dir Assoc. 2021;22(12):2565-2570.e4. [ CrossRef ] [ Medline ]
  • Farid M, Machta RM, Jones DJ, Furukawa MF, Miller D. Nursing Homes Affiliated with U.S. Health Systems. Rockville MD. Agency for Healthcare Research and Quality; 2018. URL: https://www.ahrq.gov/sites/default/files/wysiwyg/chsp/data/chsp-brief8-nursinghomes.pdf [accessed 2024-02-07]
  • CMS. Centers for Medicare and Medicaid. Washington DC. Centers for Medicare and Medicaid Special Focus Facility Program URL: https://www.cms.gov/files/document/sffpostingwithcandidatelist-october2023pdf.pdf [accessed 2023-12-09]
  • Alexander GL, Kueakomoldej S, Congdon C, Poghosyan L. A qualitative study exploring nursing home care environments where nurse practitioners work. Geriatr Nurs. 2023;50:44-51. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377-381. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Van den Broeck J, Cunningham SA, Eeckels R, Herbst K. Data cleaning: detecting, diagnosing, and editing data abnormalities. PLoS Med. 2005;2(10):e267. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Alexander G, Deroche C, Powell K, Mosa A, Popejoy L, Koopman R. Forecasting content and stage in a nursing home information technology maturity instrument using a delphi method. J Med Syst. 2020;44(3):60. [ CrossRef ] [ Medline ]
  • Poghosyan L, Nannini A, Finkelstein S, Mason E, Shaffer J. Development and psychometric testing of the nurse practitioner primary care organizational climate questionnaire. Nurs Res. 2013;62(5):325-334. [ CrossRef ] [ Medline ]
  • Bono C, Ried LD, Kimberlin C, Vogel B. Missing data on the center for epidemiologic studies depression scale: a comparison of 4 imputation techniques. Res Social Adm Pharm. 2007;3(1):1-27. [ CrossRef ] [ Medline ]
  • Williams JA, Vriniotis MG, Gundersen DA, Boden LI, Collins JE, Katz JN, et al. How to ask: Surveying nursing directors of nursing homes. Health Sci Rep. 2021;4(2):e304. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ofstedal M, Kézdi G, Couper M. Data quality and response distributions in a mixed-mode survey. Longit Life Course Stud. 2022;13(4):621-646. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rantz M, Popejoy L, Vogelsmeier A, Galambos C, Alexander G, Flesner M, et al. Reducing avoidable hospitalizations and improving quality in nursing homes with APRNs and interdisciplinary support: lessons learned. J Nurs Care Qual. 2018;33(1):5-9. [ CrossRef ] [ Medline ]
  • Alexander GL, Steege LM, Pasupathy KS, Wise K. Case studies of IT sophistication in nursing homes: a mixed method approach to examine communication strategies about pressure ulcer prevention practices. International Journal of Industrial Ergonomics. 2015;49:156-166. [ CrossRef ]
  • Sloane P, Yearby R, Konetzka R, Li Y, Espinoza R, Zimmerman S. Addressing systemic racism in nursing homes: a time for action. J Am Med Dir Assoc. 2021;22(4):886-892. [ CrossRef ] [ Medline ]

Abbreviations

HIT: health IT
IRB: institutional review board
NH: nursing home
NP: nurse practitioner
NP-NHOCQ: Nurse Practitioner Nursing Home Organizational Climate Questionnaire
NP-NHOCS: Nurse Practitioner Nursing Home Organizational Climate Survey
REDCap: Research Electronic Data Capture
SFF: special focus facility

Edited by S Ma; submitted 08.01.24; peer-reviewed by T Mujirishvili; comments to author 23.03.24; revised version received 22.04.24; accepted 28.06.24; published 29.08.24.

©Gregory L Alexander, Lusine Poghosyan, Yihong Zhao, Mollie Hobensack, Sergey Kisselev, Allison A Norful, John McHugh, Keely Wise, M Brooke Schrimpf, Ann Kolanowski, Tamanna Bhatia, Sabrina Tasnova. Originally published in JMIR Research Protocols (https://www.researchprotocols.org), 29.08.2024.

This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Research Protocols, is properly cited. The complete bibliographic information, a link to the original publication on https://www.researchprotocols.org, as well as this copyright and license information must be included.

Institution of Engineering and Technology


IET Software

Understanding Work Rhythms in Software Development and Their Effects on Technical Performance

Jiayun Zhang

  • orcid.org/0000-0002-3562-5794

Shanghai Key Lab of Intelligent Information Processing , School of Computer Science , Fudan University , Shanghai , China , fudan.edu.cn

Corresponding Author

Qingyuan Gong

  • [email protected]
  • orcid.org/0000-0001-7942-8752

Research Institute of Intelligent Complex Systems , Fudan University , Shanghai , China , fudan.edu.cn

  • orcid.org/0000-0003-4749-3060
  • orcid.org/0000-0002-4517-3779

Department of Information and Communications Engineering , Aalto University , Espoo , Finland , aalto.fi

  • orcid.org/0000-0002-9405-4485

Aaron Yi Ding

  • orcid.org/0000-0003-4173-031X

Department of Engineering Systems and Services , Delft University of Technology , Delft , Netherlands , tudelft.nl

The temporal patterns of code submissions, denoted as work rhythms, provide valuable insight into the work habits and productivity in software development. In this paper, we investigate the work rhythms in software development and their effects on technical performance by analyzing the profiles of developers and projects from 110 international organizations and their commit activities on GitHub. Using clustering, we identify four work rhythms among individual developers and three work rhythms among software projects. Strong correlations are found between work rhythms and work regions, seniority, and collaboration roles. We then define practical measures for technical performance and examine the effects of different work rhythms on them. Our findings suggest that moderate overtime is related to good technical performance, whereas fixed office hours are associated with receiving less attention. Furthermore, we survey 92 developers to understand their experience with working overtime and the reasons behind it. The survey reveals that developers often work longer than required. A positive attitude towards extended working hours is associated with situations that require addressing unexpected issues or when clear incentives are provided. In addition to the insights from our quantitative and qualitative studies, this work sheds light on tangible measures for both software companies and individual developers to improve the recruitment process, project planning, and productivity assessment.

1. Introduction

The time allocation for work activities is closely related to a software developer's daily routine and reflects her/his work habits. We define work rhythms in software development as the temporal patterns shown in developers' code submission activities. A typical work rhythm could be described as follows: the developer starts work at 9 a.m. on working days and concentrates on writing and submitting code during working hours; she/he takes a short break at noon for lunch, during which code submissions pause as well; after finishing the day's tasks at 6 p.m., the code is not updated until 9 a.m. on the next working day. Developers working in companies with diverse cultures follow different work rhythms: it has been reported that one-third of software developers do not adopt a typical working-hour rhythm (e.g., from 10 a.m. to 6 p.m.) [ 1 ]. Developers' work rhythms have been discussed extensively. Some Chinese tech companies have adopted an unofficial work schedule known as the "996 working hour system," which requires employees to work from 9 a.m. to 9 p.m., 6 days a week. The public quickly took notice of these extreme working hours as they were shared on social media ( https://github.com/996icu/996.ICU ). This schedule has drawn criticism arguing that developers cannot stay focused on programming during such long working hours and that their efficiency and productivity decrease ( https://www.scmp.com/tech/start-ups/article/3005947/quantity-or-quality-chinas-996-work-culture-comes-under-scrutiny ). However, leading global news media, such as Cable News Network (CNN; https://edition.cnn.com/2019/04/15/business/jack-ma-996-china/index.html ) and British Broadcasting Corporation (BBC) News ( https://www.bbc.com/news/business-47934513 ), reported another voice: many successful entrepreneurs emphasized the advantages of long-hour work schedules to their companies. These heated and controversial discussions create an urgent demand to understand developers' work rhythms and their effects on practical technical performance.

Studying work rhythms in software development yields many important implications. For example, the profiles and activities in online developer communities are considered as reliable indicators of technical performance during the hiring process [ 2 ]. However, having more commits during off-hours does not necessarily equate to better code quality. Instead of assessing based on the quantity of commits, it is crucial to acquire a deeper understanding of work rhythms and their effects. Such insights can help employers gain deeper knowledge about job applicants’ work habits before hiring. In addition, software development teams can rely on more rational assessments of technical performance rather than judging merely by the time spent in the office. With an understanding of the effects of work rhythms on technical performance, both project teams and individual developers can better allocate and schedule their time in development.

The existing studies on the work rhythms of people in different occupations often cover their effects on work performance. Alternative work schedules, such as flexible and compressed work schedules, had positive effects on work-related criteria including productivity and job satisfaction [ 3 , 4 ]. Conversely, sustained work during long working hours was associated with an increased risk of errors and decreased work performance [ 5 , 6 , 7 , 8 , 9 ]. In the field of software engineering, multiple studies have examined the relationship between code quality and the time when the work is performed. It has been found that the bugginess of commits is related to the time (i.e., the hour of the day) when those commits have been made, but there are large variations among individuals and projects [ 10 , 11 , 12 ].

Previous studies have focused primarily on the effects of work hours on code quality within a limited set of organizations, and have mostly considered code bugginess as the quality metric. In addition, they have not sufficiently addressed the circadian and weekly patterns that characterize developers’ work habits. Our study leverages a large-scale real-world dataset from GitHub to explore how work rhythms correlate with multiple dimensions of technical performance. Because project-level working behaviors often involve the collaborative efforts of multiple contributors and do not necessarily reflect the work patterns of individual developers, our study analyzes both project-level (in our study, the term “project” is used synonymously with “repository”) and individual-level metrics. We aim to provide a more comprehensive understanding of work patterns from two different yet interconnected perspectives. Specifically, we apply spectral biclustering [ 13 ] to identify work rhythms from both the individual and project perspectives. The biclustering algorithm simultaneously groups both the rows and the columns of a data matrix, allowing us to understand groups of similar subjects (i.e., developers/repositories) and their typical commit behaviors at the same time. We analyze the relationship between the identified work rhythms and demographics (such as region and account/repository age) and collaboration roles (i.e., whether a developer is a structural hole spanner (SHS) [ 14 ]). We use popularity metrics (such as followers, stars, forks, and issues on GitHub) and code productivity (measured by lines of code changed per week) as indicators of technical performance. We then perform a comprehensive analysis of how these work rhythms influence technical performance, and conduct a survey study to complement the results of the empirical data analysis.

We design an approach based on the spectral biclustering algorithm to identify the work rhythms of repositories and individual developers. This method reveals four distinct work rhythms among individuals and three among repositories.

We present an empirical analysis of the correlations between work rhythms and demographics, including regions, age, and collaboration roles. We define multiple practical measures of technical performance and study the effects of work rhythms on them.

We conduct a survey involving 92 respondents to gain insights into developers’ experiences and their reasons for, and attitudes towards, overtime work.

We introduce the background and related works in Section 2 and research questions in Section 3 , followed by our research methods (Section 4 ) and results (Section 5 ). We discuss the significance of our contributions in Section 6 and offer some concluding remarks in Section 7 .

2. Background and Related Work

Developers engage in multiple work activities in a given week and follow certain temporal regularities in software development [ 15 , 16 , 17 ]. Sequential analysis of generated content is crucial for understanding the behavior patterns of online users [ 18 , 19 ]. Widely used development tools such as version control systems and online developer communities ensure the transparency of workflows, providing researchers with abundant resources to investigate developers’ work practices [ 20 , 21 , 22 , 23 ]. By exploring the data from these development tools, multiple studies have examined developers’ work practices and contributions.

First, work time in software development has been studied. For example, Claes et al. [ 1 ] defined work rhythm as the circadian and weekly patterns of commits. They analyzed the commit timestamps of 86 open source software projects and reported that two-thirds of the developers follow a standard work schedule and rarely work nights and weekends. In addition, Traulle and Dalle [ 24 ] investigated the evolution of developers’ work rhythms and observed a trend in which developers adopt more regular work patterns over time and start working increasingly earlier. Furthermore, this study is related to our previous work [ 25 ], which examined the commit activities of tech companies in China and the United States and compared the differences in working hours between companies in the two countries. Compared with our previous work, this study expands the scope and introduces new research questions: the correlations between work rhythms and technical performance. In addition, we enlarge the dataset to include a wider range of regions and analyze working behaviors at more granular levels by examining both project- and individual-level behaviors.

Second, the relationship between work quality and work time has been investigated. For example, Khomh et al. [ 26 ] studied the impact of Firefox’s rapid release cycle on software quality. They found that the fast release cycle did not lead to more bugs but accelerated the process of fixing bugs. In addition, several studies focused on the relationship between the bugginess of code and the hour of the day when the code is submitted. For instance, Eyolfson et al. [ 10 , 11 ] studied three well-known open source projects and found that commits made around midnight and in the early morning contain more bugs, while commits made in the morning have the best quality. Prechelt and Pepper [ 12 ] investigated a closed-source industry project and proposed that 8 p.m. is the hour with the highest error rate. These results vary across different projects.

Previous research on the effects of work time often investigates projects from a limited set of organizations and considers only the bugginess of code as the metric of code quality. In addition, these studies typically focus on the effects of specific hours of the day rather than on circadian and weekly patterns. There has not yet been a sufficient investigation, with solid evidence, of the relationship between work rhythms and technical performance from multiple aspects. In this paper, we perform data analysis on a real-world code submission dataset collected from GitHub, a prominent online developer platform with more than 100 million developers and more than 420 million hosted repositories ( https://github.com/about , accessed on May 18, 2024).

During software development, people often use Git, a distributed version control system, to track modifications to the code. To submit code changes to Git, developers make commits that include details such as authorship, timestamp, and the code changes made. The temporal distribution of a developer’s commit logs reflects her/his rhythm of submitting code changes. These commit logs can be accessed if the projects are uploaded to GitHub and made publicly visible. Figure 1 shows the time distribution of developers’ code submissions on GitHub. The statistics are generated from the GitHub User Dataset [ 27 , 28 ], which consists of the information and activities of more than 10 million randomly selected GitHub users. We focus on the users who have more than 100 commits and have submitted code on more than 100 different days. Among these users, we select 13,201 developers with 5,406,933 commits. In general, developers commit more frequently on weekdays than on weekends. There are peak hours of code submission at 11 a.m., 4 p.m., and 10 p.m., and an off-peak period during the early morning, which is consistent with common daily routines. The aggregated commit logs in Figure 1 show that developers exhibit temporal regularities in code submissions. However, given the differences in the adoption of work practices, such a general work rhythm cannot effectively represent the work habits of each developer.
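As a minimal illustration of how such commit-time statistics can be derived, the sketch below tallies commits by local hour and weekday/weekend from ISO-8601 timestamps, dropping entries that lack time zone information (as the paper’s cleaning step does). The input format and function name are illustrative, not the dataset’s actual schema:

```python
from collections import Counter
from datetime import datetime

def commit_hour_profile(timestamps):
    """Tally commits by (is_weekend, local hour) from ISO-8601
    timestamps carrying a UTC offset, e.g. '2019-05-01T22:13:00+08:00'.
    Timestamps without an offset are skipped, mirroring the exclusion
    of commit logs lacking time zone information."""
    counts = Counter()
    for ts in timestamps:
        try:
            t = datetime.fromisoformat(ts)
        except ValueError:
            continue
        if t.utcoffset() is None:      # no time zone info -> drop
            continue
        is_weekend = t.weekday() >= 5  # Sat=5, Sun=6, in local time
        counts[(is_weekend, t.hour)] += 1
    return counts

profile = commit_hour_profile([
    "2019-05-01T22:13:00+08:00",  # Wednesday night
    "2019-05-04T10:05:00+08:00",  # Saturday morning
    "2019-05-01T23:59:00",        # no offset -> excluded
])
```

Aggregating such profiles over many users yields the weekday/weekend distribution shown in Figure 1.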


3. Research Questions

We aim to study the work rhythms of developers and software projects to have a comprehensive view of work rhythms in software development from both the individual and group levels. Our study is guided by the following four research questions:

RQ1. What are the work rhythms of individual developers and software projects?

RQ2. Are work rhythms related to demographics and collaboration roles?

The first two RQs intend to reveal representative work rhythms among individual developers and software projects and examine discrepancies in the demographics of the developers with different work rhythms.

RQ3. What are the correlations between different work rhythms and technical performance?

The third RQ is to seek a deeper understanding of the relationships of different work rhythms with the outcome of work by considering various metrics for technical performance.

RQ4. What are developers’ attitudes towards work rhythms and productivity?

The last RQ investigates developers’ actual work experience and their views on productivity.

4. Methods

In this section, we present the data collection and analysis methods in our study. A summary of the research subjects, variables, and the methods of data analysis for each research question is provided in Table 1 . The overview of the methodology is presented in Figure 2 .

Research question | Subject | Variable | Analysis method
RQ1 | Developer/repository | Commit frequency during the week | Spectral biclustering
RQ2 | Developer | Account creation time | Mann–Whitney U test
RQ2 | Developer | Structural hole spanner | APGreedy, Pearson’s chi-squared test
RQ2 | Repository | Regions | Pearson’s chi-squared test
RQ2 | Repository | Repository creation time | Mann–Whitney U test
RQ3 | Developer | Number of followers | Mann–Whitney U test
RQ3 | Developer | Average number of stars | Mann–Whitney U test
RQ3 | Developer | h-index of stars | Mann–Whitney U test
RQ3 | Repository | Number of stars | Mann–Whitney U test
RQ3 | Repository | Number of forks | Mann–Whitney U test
RQ3 | Repository | Number of open issues | Mann–Whitney U test
RQ3 | Repository | Lines of code changed per week | Mann–Whitney U test
RQ4 | Developer | Required and actual working hours | User study
RQ4 | Developer | Time allocation for work activities | User study
RQ4 | Developer | Attitude towards working overtime | User study


4.1. Data Collection

The commit logs of public projects on GitHub are publicly visible and can be retrieved using the GitHub API. Our data collection adhered to GitHub’s terms of service ( https://help.github.com/articles/github-terms-of-service/ ) and took place from May 1 to May 27, 2019. The dataset covers the commit activities of the source repositories of 110 organizations since the repositories were created. The companies span a wide range of locations, from the United States (such as Facebook, Amazon, and Google) to China (such as Baidu, Tencent, and Alibaba) and Europe (such as SAP, Nokia, and Spotify). To accurately assess work rhythms, we used the local time of each commit log to avoid the potential influence of the different time zones in which the commits were made. Commit logs without time zone information (9.03% of the total) were excluded. After this data cleaning, a total of 1,532,439 commits remained. We then grouped these commits by repository and by committer, respectively, to form the following two datasets for our analysis.

Company repositories. We scanned the repository lists of the 110 organization accounts and crawled descriptive information about the repositories and commit logs submitted into the repositories. We selected repositories with at least 300 commits and formed the repository dataset with a total of 1,131 repositories and 1,111,685 commits.

Individual developers. To study the work rhythms of individual developers, we first merged the different identities of the same developer, as a developer may have multiple identities on GitHub and in the version control system. We extracted the email address from the version control system’s author field and the GitHub account ID from the author field recorded in GitHub commit activity. We created a mapping from email addresses to GitHub accounts and grouped together identities that shared the same account ID or email address. Following this dealiasing process, 47.1% of the committer identities were merged. We then chose the core developers by selecting those with at least 30 commits. These developers are the top 12.5% of the committers and have made 85% of all commits in our dataset. We further crawled the GitHub account information of the developers, including the number of followers and the number of stars of each of their own repositories. Finally, we formed our developer dataset with 7,509 individual developers and 1,296,715 commits, among which 2,754 developers have detailed information about their GitHub accounts.
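The dealiasing step described above can be sketched as a union-find over (email, account ID) pairs; the field layout and function name are assumptions for illustration, not the paper’s actual implementation:

```python
def merge_identities(commits):
    """Group committer identities that share a GitHub account ID or an
    email address, using union-find. Each commit contributes an
    (email, account_id) pair; either field may be None."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for email, account in commits:
        if email and account:
            union(("email", email), ("account", account))
        elif email:
            find(("email", email))
        elif account:
            find(("account", account))

    groups = {}
    for key in parent:
        groups.setdefault(find(key), set()).add(key)
    return list(groups.values())

groups = merge_identities([
    ("a@x", "alice"), ("a@x", None), (None, "alice"), ("b@y", "bob"),
])
```

Here the two identities sharing the email "a@x" and the account "alice" collapse into one developer, while "bob" remains separate.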

4.2. Identifying Work Rhythms

To profile how commits are created by a developer or in a project repository, we compute the frequencies of commit activities across different time intervals and apply clustering to identify patterns.

4.2.1. Data Processing
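A plausible sketch of this processing step, based on the 48-dimensional input described in the next subsection (24 weekday hours plus 24 weekend hours, matching Tables 2 and 3), is given below; the normalization to relative frequencies is an assumption:

```python
def rhythm_vector(commit_times):
    """Build a 48-dimensional commit-frequency vector for clustering:
    entries 0-23 are weekday hours, entries 24-47 are weekend hours,
    normalized so the vector sums to 1. `commit_times` is a list of
    (weekday, hour) pairs with weekday in 0..6 (Mon=0)."""
    vec = [0.0] * 48
    for weekday, hour in commit_times:
        offset = 24 if weekday >= 5 else 0  # Sat/Sun go to bins 24-47
        vec[offset + hour] += 1
    total = sum(vec)
    return [v / total for v in vec] if total else vec

vec = rhythm_vector([(0, 9), (2, 9), (5, 10)])  # two weekday, one weekend commit
```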

4.2.2. Biclustering Model

Among various classical clustering methods, such as K-means [ 30 ] and DBSCAN [ 31 ], and state-of-the-art methods designed for specific applications, such as topic models (latent Dirichlet allocation) [ 32 , 33 ], we choose the spectral biclustering [ 13 ] algorithm to discover the work rhythms in our dataset. Spectral biclustering is a clustering technique that generates biclusters: groups of samples (rows) that show similar behavior across a subset of features (columns), or vice versa. In our scenario, we group developers/repositories and commit behaviors at the same time to understand the groups of similar subjects and their typical behaviors. Specifically, developers/repositories grouped into different row clusters show different commit behaviors. In addition, the column clusters output by the algorithm enable us to infer how developers/repositories in different row clusters behave in each subset of hours. Developers/repositories with the same rhythm have similar commit frequencies in each subset of hours.

The model takes the 48-dimensional vectors as input and automatically discovers the clusters of work rhythms by measuring the similarities between them. To implement the clustering model, we used Scikit-learn [ 34 ], a widely used machine-learning library. To determine the optimal parameter setting, we performed an iterative empirical search for the number of work rhythms k from 2 to 8. For each k, we visualized the rhythms and examined the number of samples in each cluster to ensure that the clusters contain sufficient individuals and exhibit distinct patterns beyond mere time shifting. We choose k as the largest tested value that yields stable and distinctive work rhythms.
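The clustering step can be sketched with Scikit-learn’s SpectralBiclustering on toy data; the matrix below stands in for the real 48-dimensional commit-frequency vectors, and the planted structure and cluster count are illustrative only:

```python
import numpy as np
from sklearn.cluster import SpectralBiclustering

# Toy stand-in for the real input: 20 "developers" x 48 hourly bins,
# with two planted rhythms (commits concentrated in different halves
# of the week). Values and sizes are illustrative only.
rng = np.random.default_rng(0)
X = 1.0 + rng.random((20, 48))
X[:10, :24] += 10.0   # rhythm A: active in the first 24 bins
X[10:, 24:] += 10.0   # rhythm B: active in the last 24 bins

model = SpectralBiclustering(n_clusters=2, random_state=0)
model.fit(X)
rows = model.row_labels_     # one rhythm label per developer
cols = model.column_labels_  # one cluster label per hourly bin
```

In the paper’s setting, this fit would be repeated for k from 2 to 8, with the resulting row clusters visualized as heatmaps to choose the final k.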

4.3. Empirical Analysis on Identified Work Rhythms

4.3.1. Demographics of Developers and Repositories

We intend to explore whether developers or repositories with specific demographic information tend to follow specific work rhythms.

First, local cultures may have an impact on work rhythms. To investigate whether there is a difference among developers who work on repositories from different regions in terms of work rhythms, we examine the countries of the repositories that the developers worked on. For each developer, we group the repositories that she/he has made contributions to and check which countries the organizations of the repositories belong to. If a developer has contributions to repositories from more than one country, we set the work region of the developer as “multiple countries.” We target four different regions: the United States, China, Europe, and multiple countries.
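The region-assignment rule described above amounts to a small helper; the function name and inputs are illustrative:

```python
def developer_region(repo_countries):
    """Assign a developer's work region from the countries of the
    organizations whose repositories she/he contributed to. Returns
    "multiple countries" when more than one country is involved."""
    countries = set(repo_countries)
    if len(countries) > 1:
        return "multiple countries"
    return countries.pop() if countries else None
```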

In addition, considering that senior developers may take charge of more projects than junior developers, we assume that senior developers have different work rhythms from junior developers. For this purpose, we investigate whether there is a correlation between the type of work rhythm and the seniority of the developers, using the number of days since the creation of a developer’s GitHub account as a proxy for her/his seniority in programming.

Furthermore, according to Vasilescu et al.’s [ 35 ] study, there are differences in terms of productivity between younger repositories and older ones. As a result, repositories with longer histories may have different work rhythms from newly created ones. We count the number of days since a project was created on GitHub as the measure of repository age.

4.3.2. Collaboration Role

Collaboration is an important feature of software engineering. The developer’s participation in project collaboration is a testament to her/his technical ability.

The structural hole theory [ 14 , 36 , 37 , 38 ] in social network analytics suggests that people who are positioned in structural holes, known as SHS, play a critical role in the collaboration and management of the teams. A structural hole is perceived as a gap between two closely connected groups. SHS fill in the gaps among different groups. They control the diffusion of valuable information across groups and come up with new ideas by combining ideas from multiple sources [ 14 ]. Bhowmik et al. [ 39 ] studied the role of structural holes in requirements identification of open-source software development and found that structural holes are positively related to the contribution of a larger amount of new requirements and play an important role in new requirement identification.

We intend to see whether there is a difference in work rhythms between SHS developers and ordinary developers. We build a collaboration graph from our dataset, in which each node represents a developer and an edge between two nodes indicates that the two developers have committed to the same repository. We apply an advanced SHS identification algorithm called APGreedy [ 40 ] (there are several SHS identification algorithms [ 37 , 41 , 42 ], and APGreedy is a representative one) to find the SHS in the collaboration graph and choose the top 500 developers as the SHS developers. After filtering out developers with fewer than 30 commits, we obtain 246 SHS developers in total. Accordingly, we select 246 non-SHS developers from the rest by random sampling to represent the ordinary developers.
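The collaboration graph described above can be sketched as follows; the input mapping and function name are assumptions for illustration:

```python
from itertools import combinations

def build_collaboration_graph(repo_committers):
    """Build the undirected collaboration graph: nodes are developers,
    and an edge links two developers who committed to the same
    repository. Input maps repository name -> set of committers."""
    nodes, edges = set(), set()
    for committers in repo_committers.values():
        nodes |= set(committers)
        # Canonical (sorted) pairs avoid duplicate undirected edges.
        for a, b in combinations(sorted(committers), 2):
            edges.add((a, b))
    return nodes, edges

nodes, edges = build_collaboration_graph({
    "repo1": {"alice", "bob"},
    "repo2": {"bob", "carol"},
})
```

An SHS algorithm such as APGreedy would then rank the nodes of this graph; here "bob" bridges the two repositories’ contributor groups.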

4.3.3. Developer-Level Measures on Technical Performance

We define the following measures for evaluating the technical performance of a developer:

Average number of stars. GitHub provides a starring function for users to mark their interest in projects. We count the average number of stars received by the repositories owned by the developer. Receiving more stars indicates a higher popularity of a project [ 43 ].

Number of followers. We use the number of followers a GitHub user has at the time of data collection as a signal of standing [ 44 ] within the community. Users with lots of followers are influential in the developer community as many people are paying attention to their activities.

H-index of stars. The h-index [ 45 ] was originally introduced as a metric to evaluate both the productivity and the citation impact of a scholar’s research publications. It has since been used to measure the influence of users’ generated content in social networks [ 46 ]. We define the h-index of a developer as the maximum value of c such that the developer has published c repositories that have each been starred at least c times. We use this metric to measure both the productivity and the influence of a developer on GitHub.
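The h-index of stars can be computed directly from a developer’s per-repository star counts:

```python
def star_h_index(star_counts):
    """h-index over a developer's repositories: the largest c such
    that c repositories each have at least c stars."""
    stars = sorted(star_counts, reverse=True)
    h = 0
    for i, s in enumerate(stars, start=1):
        if s >= i:
            h = i
        else:
            break
    return h
```

For example, a developer whose repositories have 10, 5, 3, and 1 stars has an h-index of 3: three repositories each with at least 3 stars.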

4.3.4. Repository-Level Measures of Technical Performance

To examine the technical performance of repositories, we define the following measures:

Number of stars. We use the number of stars a repository has received to evaluate the popularity of a repository. A repository with many stars implies that many people show their interests in it [ 35 , 47 ].

Number of forks. The “forking” function on GitHub enables developers to create a copy of a repository as their personal repository and then they can make changes to the code freely. Similar to the number of stars discussed above, the number of forks a repository has received is another important indicator that a repository is popular [ 35 , 44 , 48 ].

Number of open issues. Issues can be used to track bugs, enhancements, or other requests. When a problem with the project’s code was suspected, submitters and core members often engaged in extended discussions about the appropriateness of the code [ 49 , 50 ]. Repositories with more open issues receive more attention than those with fewer.

Lines of code changed per week (LOC_changed). This measure is defined as the average number of lines of code changed (the sum of additions and deletions) across all commits in a repository per week. It is a measure of output produced per unit time, which serves as a proxy for productivity [ 35 , 51 , 52 , 53 ].
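A minimal sketch of this measure, assuming commits are given as (additions, deletions) pairs and the repository age is known in weeks:

```python
def loc_changed_per_week(commits, weeks):
    """Average lines of code changed (additions + deletions) per week
    over a repository's history. `commits` is a list of
    (additions, deletions) pairs; `weeks` is the repository age in weeks."""
    total = sum(added + deleted for added, deleted in commits)
    return total / weeks if weeks else 0.0
```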

4.3.5. Hypothesis Testing

To accurately identify behavioral differences among different populations, we conduct statistical hypothesis testing on different groups.

First, we conduct Pearson’s chi-squared test [ 54 ] to examine if there are significant differences in the work rhythms among different groups (i.e., regions and collaboration roles) of projects or developers. The Pearson’s chi-squared test is commonly used for evaluating the significance of the association between two categories in sets of categorical data.

Second, we statistically validate if there are significant differences in the demographics and technical performance among different groups of software projects and developers. We compute the measures of each subject within the group and the measures of the population outside the group. Then, we apply the Mann–Whitney U test [ 55 ], which is commonly used to determine whether two independent samples are from populations with the same distribution.

The results of Pearson’s chi-squared test and the Mann–Whitney U test are measured by the p-value, where a smaller p-value indicates a higher significance level for rejecting the null hypothesis H0. A p-value below 0.05 indicates a significant difference between the two populations in terms of the selected measure. Cramer’s V and Cliff’s delta effect sizes are used to supplement the results of Pearson’s chi-squared test and the Mann–Whitney U test, respectively.
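Cliff’s delta, the effect size used here alongside the Mann–Whitney U test, can be computed from its definition as the normalized difference between the number of pairs in which one sample exceeds the other:

```python
def cliffs_delta(xs, ys):
    """Cliff's delta effect size: (#pairs x>y - #pairs x<y) / (n*m).
    Positive values mean `xs` tends to exceed `ys`; the sign convention
    matches the d values reported alongside the Mann-Whitney tests."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))
```

This O(n*m) form is fine for illustration; in practice it is computed on the measure values inside versus outside each rhythm group.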

4.4. User Survey

To investigate how developers experience and think of their work rhythms and productivity, we designed an online survey and sent it to developers in selected tech companies. The selected companies included a mix of large corporations and startups.

Our survey was reviewed and approved by the Institute of Science and Technology, Fudan University. Prior to the launch of the survey, we invited seven developers from different tech companies and did a pilot test. These participants completed the questionnaire and provided feedback, which we used to refine the survey. Next, we performed an undeclared pilot test involving 10 participants from selected companies in our dataset. We reviewed and discussed their responses to ensure that the questionnaire was free of major issues. After finalizing the survey, we distributed it online and asked the pilot participants to share the link to the survey with others. The survey had 1,516 views and received 92 responses from eligible respondents who identified their current job as software development. The survey questions are given in the appendix.

First, to validate our results on work rhythms, we asked survey participants about their required and actual working hours on a typical work day. Participants were asked to provide both their required and actual start and end times of work, or to indicate that there were no required working hours.

Next, we asked participants about the time they spent on different work activities and programming themes. According to Meyer et al. [ 56 ], developers primarily identified coding-related tasks as productive, whereas activities such as attending meetings and reading and writing emails were often considered unproductive. To gain insight into productivity both during and outside office hours, we asked participants to indicate the percentage of time they spent on various work activities during these periods, including coding, studying, project planning, writing documents, contacting colleagues, meetings, social activities, and others. Participants could choose one of the following five options to indicate the percentage of time spent on each work activity or programming theme: “less than 5%,” “between 5% and 20%,” “between 20% and 35%,” “between 35% and 50%,” and “more than 50%.” In addition, according to Meyer et al.’s [ 56 ] work, different types of programming tasks impact productivity differently. For instance, activities such as development and bug fixing were perceived as productive, whereas testing was considered unproductive. We also asked participants about the percentage of time they spent on different programming themes in off-hours, using the same options as in the previous question. We asked participants to provide details if they had been involved in activities or programming themes other than those we listed.

Moreover, to understand whether developers believe extra working hours contribute to productivity, we included a question asking whether extra working hours increase productivity. Participants could select “agree,” “neutral,” or “disagree.” We then cross-checked their answers with their motivations for working overtime. Beckers et al. [ 57 ] proposed that the outcome of extra working hours is affected by motivation: highly motivated workers may hold a more positive attitude towards extra working hours. To see how participants’ perspectives on extra working hours differ with motivation, we included a multiple-choice question listing nine common reasons for working overtime. These options were derived from initial interviews with several developers, who explained why they worked overtime. Their reasons were used as initial options in the pilot tests, during which participants were asked to provide additional reasons if theirs were not listed. We then reviewed their answers and adjusted the options to ensure that the given reasons covered all cases. The nine reasons are (1) handling emergencies (such as application crashes), (2) meeting deadlines, (3) making up for time wasted on programming-independent work activities during office hours, (4) taxi reimbursement (some companies covered taxi expenses within specific hours), (5) a good company environment (such as free snacks and air conditioning), (6) peer pressure (participants mentioned they stayed in the office after work because most of their colleagues had not left), (7) company requirements, (8) enjoying coding in spare time, and (9) working for a bonus. One or more options could be selected, and participants could also specify their own reasons if theirs were not listed.

5. Results

5.1. RQ1. What Are the Work Rhythms of Projects and Developers?

5.1.1. Work Rhythms of Developers

We apply clustering analysis to the commit behavior of the developers in our dataset and detect four work rhythms. We visualize the four detected work rhythms as heatmaps, as shown in Figures 3(a) , 3(b) , 3(c) , and 3(d) , with the x-axis representing the hours and the y-axis representing the days of the week. The color intensity of each time slot shows the aggregated commit frequency among developers, where a darker color indicates a higher commit frequency. The detected work rhythms exhibit unique characteristics. The 48 hours of weekdays and weekends are divided into four subsets, as shown in Table 2 . We observe the commit behavior in the subsets of hours and summarize the following characteristics:


Subset | Weekday | Weekend
1 | 9 a.m. to 5 p.m. | —
2 | 7 p.m. to 12 a.m. (midnight) | 3 p.m. to 11 p.m.
3 | 9 a.m. to 2 p.m. and 12 a.m. | —
4 | 1 a.m. to 8 a.m. and 6 p.m. | 1 a.m. to 8 a.m.
  • The 48 hours of weekdays and weekends are divided into four time subsets. Developers with the same rhythm have the same degree of commit frequency in each time subset. For example, as shown in Figure 3(a) , developers with rhythm #1 made commits at a high frequency from 9 a.m. to 5 p.m. on weekdays (i.e., time subset #1), whereas they made much fewer commits during the other time subsets.

#1: Nine-to-five worker. As shown in Figure 3(a) , developers with work rhythm #1 concentrate on programming during regular office hours (9 a.m. to 5 p.m.) on weekdays. They submit code changes less frequently after work hours or on weekends.

#2: Flex timers. As shown in Figure 3(b) , the code submissions of developers with rhythm #2 are uniformly distributed on almost every hour on weekdays. Developers with this rhythm are likely to submit code changes at any time of the day and do not display fixed work and rest time.

#3: Overnight developers. As shown in Figure 3(c) , developers with rhythm #3 submit code from 9 a.m. to midnight. They also make code submissions on weekends, following a daily schedule similar to weekdays, though the commit frequency on weekends is lower than on weekdays.

#4: Off-hour developers. As shown in Figure 3(d) , the peak time of the code submissions of developers with rhythm #4 is weekday nights and weekends, instead of regular working hours on weekdays.

5.1.2. Work Rhythms of Projects

We also apply clustering analysis on the commit behavior of repositories. Three work rhythms are detected among the repositories in our dataset. Figures 4(a) and 4(b) present the temporal distributions of commit frequency for identified rhythms. The 48 hr in weekdays and weekends are divided into three subsets, as shown in Table 3 . We summarize the features of the three identified rhythms as follows:


Subset | Weekday | Weekend
1 | 9 a.m. to 5 p.m. | —
2 | 7 p.m. to 12 a.m. (midnight) | 9 a.m. to 12 a.m. (midnight)
3 | 1 a.m. to 8 a.m. and 6 p.m. | 1 a.m. to 8 a.m.
  • The 48 hours of weekdays and weekends are divided into three subsets.

#1: Typical office hours. Repositories with work rhythm #1 adopt typical work time, usually from 9 a.m. to 5 p.m. on weekdays. Code changes are rarely submitted to these repositories on weekends.

#2: Slightly extended working hours. Repositories with rhythm #2 extend the typical work time to 6 p.m. on weekdays. Compared with repositories with rhythm #1, repositories with rhythm #2 usually have more code submissions on weekends.

#3: Working overnight and on weekends. Repositories with rhythm #3 have longer working hours than those with the other two rhythms. Developers of these repositories work equally on weekdays and weekends, from nine in the morning until midnight.

The percentages of developers and repositories in each detected work rhythm are shown in Figures 5(a) and 5(b) , respectively. Among the four work rhythms detected in the developer dataset, about two-thirds of the developers follow rhythm #1 (typical working hours), which conforms to Claes et al.’s [ 1 ] finding. Among the three work rhythms detected in the repository dataset, rhythm #1 covers half of the repositories, rhythm #2 accounts for 40%, and the remaining 10% follow rhythm #3.


5.2. RQ2. Are Work Rhythms Related to Demographics and Collaboration Role?

Do work rhythms vary across regions? We examine the work regions of the developers. The percentages of developers per rhythm in each region are shown in Figure 6 . Developers working for organizations in the United States and Europe mainly follow rhythm #1, whereas rhythms #3 and #4 are more prevalent among developers working for organizations in China or in “multiple countries.” We divide developers into two groups according to their work regions: the United States and Europe as one group, and China and “multiple countries” as the other. We apply a chi-squared test to the frequency of the two groups in each of the four rhythms and find a significant difference between the two groups of developers in terms of the four work rhythms ( p -value < 0.001, Cramer’s V  = 0.325).


Is there a correlation between work rhythm and developer seniority? We investigate the account age of developers in each rhythm and perform the Mann–Whitney U test. Figure 7(a) shows the account ages of the developers for each work rhythm in box plots. Developers with rhythms #3 (p-value < 0.001, Cliff’s delta d = 0.20) and #4 (p-value = 0.004, d = 0.13) tend to have created their GitHub accounts earlier than those with other rhythms, which indicates that developers with rhythms #3 and #4 became engaged in software development earlier than those with the other two rhythms. Developers with rhythm #1 created their GitHub accounts later than others (p-value < 0.001, d = −0.20).
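The Cliff’s delta effect size reported alongside the Mann–Whitney U test can be sketched as follows; the account ages are hypothetical illustration values:

```python
def mann_whitney_u(xs, ys):
    """U statistic for the first sample: each pair with x > y counts 1, ties 0.5."""
    return sum(1.0 if x > y else 0.5 if x == y else 0.0 for x in xs for y in ys)

def cliffs_delta(xs, ys):
    """Cliff's delta: (#(x > y) - #(x < y)) / (|xs| * |ys|), in [-1, 1]."""
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))

# Hypothetical account ages (years) for two rhythm groups
rhythm3_ages = [6, 7, 8, 9]
rhythm1_ages = [3, 4, 5, 8]
d = cliffs_delta(rhythm3_ages, rhythm1_ages)
```

A positive delta means the first group’s values tend to be larger, matching the paper’s reading that a positive d for rhythm #3 indicates earlier account creation (larger account age).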


Is there a correlation between work rhythm and project maturity? We investigate the repository age in each rhythm and perform the Mann–Whitney U test. As shown in Figure 7(b), repositories with the three rhythms do not show a significant difference in repository age (p-values > 0.05).

Do SHS developers have specific work rhythms? The percentages of developers in each rhythm among SHS developers and ordinary developers are shown in Figure 8. Among ordinary developers, there are more developers with rhythm #1 and fewer with rhythm #3 than among SHS developers. We apply a chi-square test and find a significant difference between SHS and non-SHS developers in terms of rhythms #1 and #3 (p-value = 0.006, Cramér’s V = 0.128). Compared with ordinary developers, SHS developers tend to work overnight rather than within fixed office hours.


5.3. RQ3. What Are the Correlations between Different Work Rhythms and Technical Performance?

Next, we examine the effects of work rhythms on various measures of technical performance. Figures 9(a), 9(b), and 9(c) present the performance on the three measures for developers. We perform the Mann–Whitney U test and the results are shown in Table 4. The value in each entry of the table is the ratio between the median value of the measure within the group and outside the group. A value below 1 indicates that developers with the selected rhythm have a smaller value on the chosen measure, and a value above 1 means the opposite. In addition, ∗ marks a difference that is significant with p-value ≤ 0.05, ∗∗ marks p-value ≤ 0.01, and ∗∗∗ marks p-value ≤ 0.001. As shown in Table 4, developers with rhythms #3 and #4 had more followers (Cliff’s delta d = 0.30 and 0.16, respectively), received more stars for their own repositories (d = 0.228 and 0.158, respectively), and had higher h-indexes (d = 0.239 and 0.169, respectively). In contrast, developers with rhythm #1 perform the worst on all three measures: average number of stars (d = −0.235), number of followers (d = −0.282), and h-index (d = −0.243).
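The table entries described above (the median within a rhythm group divided by the median outside it) can be sketched as follows; the follower counts are hypothetical:

```python
from statistics import median

def median_ratio(values_by_rhythm, rhythm):
    """Table entry: median of the measure inside `rhythm` divided by the
    median over developers in all other rhythms."""
    inside = values_by_rhythm[rhythm]
    outside = [v for r, vs in values_by_rhythm.items() if r != rhythm for v in vs]
    return median(inside) / median(outside)

# Hypothetical follower counts per rhythm
followers = {1: [5, 8, 10], 2: [12, 20, 25], 3: [30, 40, 60]}
ratio = median_ratio(followers, 3)  # > 1: rhythm #3 developers have more followers
```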


Rhythm  Average number of stars  Number of followers  h-Index
#1      0.30                     0.34                 0.50
#2      1.25                     0.63                 1.00
#3      3.12                     2.95                 2.00
#4      2.17                     1.85                 2.00
∗∗∗ marks a difference that is significant with p-value ≤ 0.001.

We also examine the effect of repositories’ work rhythms on technical performance and apply the Mann–Whitney U test. The results are shown in Figures 10(a), 10(b), 10(c), and 10(d) and Table 5. Repositories with rhythm #2 receive more stars (d = 0.085) and have more forks (d = 0.090) than those with the other two rhythms. Repositories with rhythm #3 receive more stars than others (d = 0.151). As for the number of open issues, there is no significant difference among the three work rhythms.

Rhythm  Number of stars  Number of forks  Number of open issues  LOC
#1      0.51             0.71             1.00                   1.17
#2      1.66             1.50             1.03                   0.91
#3      1.55             0.99             0.93                   0.78
∗ marks a difference that is significant with p-value ≤ 0.05, ∗∗ marks p-value ≤ 0.01, and ∗∗∗ marks p-value ≤ 0.001.

It is interesting to find that although repositories with rhythm #1 have larger LOC changed than those with the other two rhythms, their values on the other measures of technical performance, including stars (d = −0.133) and forks (d = −0.10), turn out to be lower. To discover the reason for this phenomenon, we further check the number of lines of code added and deleted per commit in each hour of the day. As shown in Figures 11(a) and 11(b), during typical office hours, both the lines of code added and deleted per commit submitted to repositories with rhythm #1 are larger than for the other two rhythms. The commit sizes peak between 4 and 5 p.m., the largest among all hours of the day, suggesting a hypothesis that developers working on repositories with rhythm #1 may submit larger commits just before leaving the office to finish their workday on time. However, this practice might lead to lower code quality, necessitating deletions and rewrites the next day. As a result, these repositories have more frequent code changes, but fewer stars and forks.
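The per-hour aggregation behind this check can be sketched as follows; the commit records are hypothetical stand-ins for the mined data:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical commit records: (ISO timestamp, lines added, lines deleted)
commits = [
    ("2023-05-01T10:15:00", 40, 5),
    ("2023-05-01T16:30:00", 120, 30),
    ("2023-05-02T16:45:00", 90, 26),
]

added_by_hour = defaultdict(list)
deleted_by_hour = defaultdict(list)
for ts, added, deleted in commits:
    hour = datetime.fromisoformat(ts).hour  # bucket commits by hour of day
    added_by_hour[hour].append(added)
    deleted_by_hour[hour].append(deleted)

# Mean commit size per hour, as plotted in Figures 11(a) and 11(b)
mean_added = {h: sum(v) / len(v) for h, v in added_by_hour.items()}
mean_deleted = {h: sum(v) / len(v) for h, v in deleted_by_hour.items()}
```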


5.4. RQ4. What Are Developers’ Attitudes on Work Rhythm and Productivity?

5.4.1. Required Working Hours vs. Actual Working Hours

We ask participants about their companies’ required working hours and their actual working hours on a typical workday. As shown in Figure 12, most participants reply that their companies require an 8-hr workday. However, they usually work longer than required.


5.4.2. Content Switch between Office Hours and Off-Hours

Figure 13 presents the distribution of activities during office hours and off-hours. Coding occupies the majority of time in both periods. The rankings of time spent on different tasks are mostly consistent across the two periods, except for meetings and studying: during office hours, meetings rank third and studying sixth, whereas during off-hours, studying moves up to second and meetings drop to sixth. As shown in Figure 14, the most common programming activity during off-hours is developing, followed by testing, bug fixing, and creating backups.


5.4.3. Perspectives on Productivity in Extra Working Hours

Apart from 25 participants (27.17%) who state that they do not work extra hours, 38 participants (41.30%) believe that additional working hours enhance productivity, 26 (28.26%) believe that additional work time does not boost productivity, and three (3.26%) are neutral.

We ask participants why they work overtime. Among all the options, “deadline” receives the most votes (33.3%). “Emergency” is the second most popular reason with 32.3% of responses. In addition, 24.7% mention that they work overtime to make up for the time wasted on programming-independent work activities during office hours, 19.4% say that their companies require extra working hours, 16.1% agree that they work overtime because of peer pressure, 15.1% claim that they work overtime because they enjoy coding in their spare time, 7.5% say that they stay in the office after work because their companies provide a good environment, and 6.5% mention that they work overtime because their companies provide taxi reimbursement. Only 1.1% cite the bonus their companies offer for overtime work.

We cross-check their motivations and views on the productivity of additional working hours. The results are shown in Figure 15, in which the height of a rectangle represents the proportion of participants who agree with the option and the flow represents the proportion of participants who agree with both of the options it connects. According to the results, more respondents agree that extra working hours could increase productivity if they work overtime for emergencies (19 agree and 8 disagree), deadlines (18 agree and 10 disagree), making up for the time wasted on programming-independent work activities (13 agree and 10 disagree), taxi reimbursement (4 agree and 2 disagree), or a good company environment (3 agree and 2 disagree). In contrast, fewer respondents agree with the idea if they work overtime because of company requirements (8 agree and 9 disagree), peer pressure (7 agree and 8 disagree), or a bonus (0 agree and 1 disagrees). Among respondents who work overtime because they enjoy coding in their spare time, the two views are held in equal numbers (4 agree and 4 disagree).
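The cross-check underlying Figure 15 is a cross-tabulation of motivation against productivity view; a minimal sketch with hypothetical responses:

```python
from collections import Counter

# Hypothetical survey responses: (motivation for overtime, productivity view)
responses = [
    ("deadline", "agree"), ("deadline", "agree"), ("deadline", "disagree"),
    ("emergency", "agree"), ("peer pressure", "disagree"),
    ("company requirement", "disagree"),
]

# Flow widths in the diagram: count of each (motivation, view) pair
flows = Counter(responses)
# Rectangle heights: marginal counts per motivation
motivation_totals = Counter(m for m, _ in responses)
```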


6. Discussion

6.1. Implications for Software Practice

The purpose of this paper is to investigate the work rhythms in software development and their effects on technical performance. We identify four typical work rhythms in the developer dataset. The typical working hours (from 9 a.m. to 5 p.m. on weekdays) cover 64% of developers in the dataset. The remaining three rhythms represent an aperiodic work rhythm, an overnight work rhythm, and an off-hour work rhythm, respectively. In addition, three work rhythms are detected among repositories in the dataset: one typical work rhythm covering half of the repositories and two types of overtime work rhythms.

Work rhythms are correlated with demographics and collaboration roles. Work rhythms with moderately extended working hours are more popular among senior developers. The maturity of a repository does not decrease the chance of its developers working extra hours. Developers who bridge collaboration groups include a higher proportion of “overnight developers” than others.

Work rhythms with a moderate amount of extended working hours appear to be associated with good technical performance. According to our results, projects and developers following work rhythms with moderate overtime (rhythms #3 and #4 among developers and rhythms #2 and #3 among repositories) turn out to have better work performance than those following other rhythms. Projects and developers following fixed-hour work rhythms (rhythm #1 among both developers and repositories) show poorer technical performance. Developers who follow the aperiodic work rhythm (rhythm #2 among developers) do not perform better than others.

Developers’ perspectives on productivity during extended working hours are influenced by their motivations for working overtime. They tend to feel that extended working hours increase their productivity when the time for coding is insufficient due to unexpected arrangements (such as an approaching deadline) or when their companies give clear incentives (such as reimbursing taxi fares). Fewer believe that extended working hours increase productivity when they are required by their companies, work for a bonus, or simply follow their colleagues in working overtime. Tech companies and teams could benefit from practices such as not forcing members to work extra hours and providing employees with a better work environment and clear incentives.

6.2. Limitations and Threats to Validity

As a first study revealing work rhythms in software development and their effects on technical performance, our work has a few limitations. First, the data analysis in our study is limited to public open-source projects hosted on GitHub. Therefore, our conclusions are specific to open-source projects and their contributors. Although our findings demonstrate notable distinctions between work rhythms, we cannot guarantee their broader applicability to the entire industry, as comprehensive data on a wider range of companies and closed-source projects would be necessary. We notice that there are alternative platforms, such as GitLab, where organizations release their work projects in a timely way. In addition, while we aim to capture an authentic snapshot of developer activity in open-source projects by forming an actual distribution of repositories across the companies, the variation in the number of repositories across these companies could potentially introduce bias into the results. In future work, we plan to explore other data sources to validate and expand our findings.

Second, our quantitative analysis of the work rhythms primarily focuses on commit activities. Analysis of a more comprehensive dataset could better reveal the rules of a field of research [58]. Other activities, such as meetings and document writing, also occupy developers’ working hours; therefore, the time spent on programming might not fully represent their work schedule. However, because programming is a major task of developers’ work, the temporal pattern of commits is a strong indicator of work time and our findings could provide insights into developers’ working status. We also acknowledge that there might be a delay between the time of making commits and the actual time of completing coding tasks. However, because our analysis is based on aggregated commits rather than individual ones, the impact of such delays should be negligible.

Third, the metrics that we use to measure technical performance are indirect. For developers, we use the average number of stars, the number of followers, and the h-index of stars as indicators of their reputations. For repositories, we consider the number of stars, the number of forks, and the number of issues as proxies for user attention. More user attention and discussion mean that the repositories and developers are recognized by more people, which indicates good technical performance. In addition, we use the lines of code changed per week to measure code productivity. Although these measures are intuitively reasonable, they capture technical performance only partially. More metrics, such as code quality, should be considered to obtain a comprehensive understanding of technical performance.
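The h-index of stars mentioned above follows Hirsch’s definition [45] applied to repositories; a minimal sketch, with hypothetical star counts:

```python
def h_index(star_counts):
    """Largest h such that the developer owns h repositories with at least
    h stars each (Hirsch's index applied to repository stars)."""
    h = 0
    for i, stars in enumerate(sorted(star_counts, reverse=True), start=1):
        if stars >= i:
            h = i
        else:
            break
    return h

# Hypothetical: a developer owning five repositories
score = h_index([10, 8, 5, 4, 3])  # four repositories have at least 4 stars
```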

7. Conclusions and Future Work

In this paper, we aim to discover work rhythms in software development and investigate their effects on technical performance. We found four work rhythms among individuals and three work rhythms among repositories in our dataset. The findings indicate that developers working for organizations in China or multiple countries tend to follow long-hour work rhythms, whereas those working for organizations in the United States and Europe tend to follow the typical work rhythm. Regarding the effects of work rhythms on technical performance, we found that a moderate amount of overtime work is related to good technical performance, whereas fixed office hours appear to be associated with projects and developers who receive less attention. In addition, our survey study indicates that developers usually tend to work longer than their companies’ required working hours. A positive attitude towards overtime work is often linked to situations that require addressing unexpected issues, such as approaching deadlines, or when clear incentives are provided.

For future work, we aim to delve deeper into the underlying mechanisms behind developers’ work. We wish to understand the underlying causes for different working rhythms by considering the interplay between work rhythms and other factors, such as technical roles and collaboration patterns. Furthermore, we plan to investigate the causal relationship between work rhythms and technical performance by conducting experimentation and incremental studies.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work has been sponsored by National Natural Science Foundation of China (nos. 62072115 and 62102094), Shanghai Science and Technology Innovation Action Plan Project (no. 22510713600), European Union’s Horizon 2020 Research and Innovation Programme under the grant agreement no. 101021808, and Marie Skłodowska Curie grant agreement no. 956090.

The User Survey

What is the country of your company?

How long have you been employed at your current company?

What is the type of your current job? (e.g., development, testing, product management, etc.)

What is your company’s designated working hour for workdays? (Please fill in the start and end time in 24-hr format.)

What are your actual working hours for workdays? (Please fill in the start and end time in 24-hr format.)

I work on both Saturday and Sunday every weekend

I work on either Saturday or Sunday every weekend

I sometimes work on weekends (less than once a week, please specify how many days per month on average)

I never work on weekends

Other (please specify)

Most of my colleagues work overtime.

My company provides benefits for overtime workers.

I enjoy working overtime.

I work during holidays.

I work more before/after holidays.

Project planning

Reading/writing documents and preparing reports

Handling other work tasks, e.g., reading/writing emails, etc.

Learning software, tools, skills, etc.

Business entertainment, e.g., hosting colleagues, etc.

Leisure activities

Development

I do not work overtime.

Handling emergencies (such as application crashes).

Making up for the time wasted on programming-independent work activities during office hours.

Company requirements.

Peer pressure (most of my colleagues have not left).

Enjoying coding in spare time.

The company provides a good environment, e.g., free snacks and air conditioning.

The company provides taxi reimbursements within specific hours.

Working for bonus.

Other. (Please specify the reason.)

Agree—overall, working overtime increases my work output.

Disagree—overtime work does not compensate for my extra working hours.

Open Research

Data Availability

As the data used in this work are publicly visible and accessible on GitHub, researchers interested in accessing the data can retrieve it directly from the GitHub platform with its official API. To ensure transparency and facilitate further research, the list of organizations and repositories in our dataset is publicly available on GitHub: https://github.com/jiayunz/Work-Rhythms-in-Software-Development . Researchers can refer to this repository to gain access to the specific projects and repositories included in the dataset. For any inquiries or requests related to the dataset, researchers can contact the corresponding author through email.

  • 1 Claes M., Mäntylä M., Kuutila M., and Adams B., Do programmers work at night or during the weekend?, Proceedings of the 40th International Conference on Software Engineering, 2018, IEEE, 705–715.
  • 2 Marlow J. and Dabbish L., Activity traces and signals in software developer recruitment and hiring, Proceedings of the 2013 Conference on Computer Supported Cooperative Work, 2013, ACM, 145–156.
  • 3 Baltes B. B., Briggs T. E., Huff J. W., Wright J. A., and Neuman G. A., Flexible and compressed workweek schedules: a meta-analysis of their effects on work-related criteria, Journal of Applied Psychology (1999) 84, no. 4, 496–513, https://doi.org/10.1037/0021-9010.84.4.496.
  • 4 Smith L., Folkard S., Tucker P., and Macdonald I., Work shift duration: a review comparing eight hour and 12 hour shift systems, Occupational and Environmental Medicine (1998) 55, no. 4, 217–229, https://doi.org/10.1136/oem.55.4.217.
  • 5 Krueger G. P., Sustained work, fatigue, sleep loss and performance: a review of the issues, Work & Stress (1989) 3, no. 2, 129–141, https://doi.org/10.1080/02678378908256939.
  • 6 Josten E. J., Ng-A-Tham J. E., and Thierry H., The effects of extended workdays on fatigue, health, performance and satisfaction in nursing, Journal of Advanced Nursing (2003) 44, no. 6, 643–652, https://doi.org/10.1046/j.0309-2402.2003.02854.x.
  • 7 Lockley S. W., Barger L. K., Ayas N. T., Rothschild J. M., Czeisler C. A., and Landrigan C. P., Effects of health care provider work hours and sleep deprivation on safety and performance, The Joint Commission Journal on Quality and Patient Safety (2007) 33, no. 11, 7–18, https://doi.org/10.1016/S1553-7250(07)33109-7.
  • 8 Richardson A., Turnock C., Harris L., Finley A., and Carson S., A study examining the impact of 12-hour shifts on critical care staff, Journal of Nursing Management (2007) 15, no. 8, 838–846, https://doi.org/10.1111/j.1365-2934.2007.00767.x.
  • 9 Keller S. M., Berryman P., and Lukes E., Effects of extended work shifts and shift work on patient safety, productivity, and employee health, AAOHN Journal (2009) 57, no. 12, 497–504, https://doi.org/10.1177/216507990905701204.
  • 10 Eyolfson J., Tan L., and Lam P., Do time of day and developer experience affect commit bugginess?, Proceedings of the 8th Working Conference on Mining Software Repositories, 2011, ACM, 153–162.
  • 11 Eyolfson J., Tan L., and Lam P., Correlations between bugginess and time-based commit characteristics, Empirical Software Engineering (2014) 19, no. 4, 1009–1039, https://doi.org/10.1007/s10664-013-9245-0.
  • 12 Prechelt L. and Pepper A., Why software repositories are not used for defect-insertion circumstance analysis more often: a case study, Information and Software Technology (2014) 56, no. 10, 1377–1389, https://doi.org/10.1016/j.infsof.2014.05.001.
  • 13 Kluger Y., Basri R., Chang J. T., and Gerstein M., Spectral biclustering of microarray data: coclustering genes and conditions, Genome Research (2003) 13, no. 4, 703–716, https://doi.org/10.1101/gr.648603.
  • 14 Burt R. S., Structural Holes: The Social Structure of Competition, 2009, Harvard University Press.
  • 15 Perry D. E., Staudenmayer N. A., and Votta L. G., Understanding and improving time usage in software development, Software Process (1995) 5, 111–135.
  • 16 LaToza T. D., Venolia G., and DeLine R., Maintaining mental models: a study of developer work habits, Proceedings of the 28th International Conference on Software Engineering, 2006, ACM, 492–501.
  • 17 Fu E., Zhuang Y., Zhang J., Zhang J., and Chen Y., Understanding the user interactions on GitHub: a social network perspective, Proceedings of CSCWD, 2021, IEEE, 1148–1153.
  • 18 Gong Q., Chen Y., He X., Zhuang Z., Wang T., Huang H., Wang X., and Fu X., DeepScan: exploiting deep learning for malicious account detection in location-based social networks, IEEE Communications Magazine (2018) 56, no. 11, 21–27, https://doi.org/10.1109/MCOM.2018.1700575.
  • 19 He X., Gong Q., Chen Y., Zhang Y., Wang X., and Fu X., DatingSec: detecting malicious accounts in dating apps using a content-based attention network, IEEE Transactions on Dependable and Secure Computing (2021) 18, no. 5, 2193–2208, https://doi.org/10.1109/TDSC.8858.
  • 20 Saini M. and Kaur K., Fuzzy analysis and prediction of commit activity in open source software projects, IET Software (2016) 10, no. 5, 136–146, https://doi.org/10.1049/iet-sen.2015.0087.
  • 21 Javeed F., Siddique A., Munir A., Shehzad B., and Lali M. I. U., Discovering software developer’s coding expertise through deep learning, IET Software (2020) 14, no. 3, 213–220, https://doi.org/10.1049/iet-sen.2019.0290.
  • 22 Aljemabi M. A., Wang Z., and Saleh M. A., Mining social collaboration patterns in developer social networks, IET Software (2020) 14, no. 7, 839–849, https://doi.org/10.1049/iet-sen.2019.0316.
  • 23 Sajedi-Badashian A. and Stroulia E., Investigating the information value of different sources of evidence of developers’ expertise for bug assignment in open-source projects, IET Software (2020) 14, no. 7, 748–758, https://doi.org/10.1049/iet-sen.2019.0384.
  • 24 Traullé B. and Dalle J.-M., The evolution of developer work rhythms, International Conference on Social Informatics, 2018, Springer, 420–438.
  • 25 Zhang J., Chen Y., Gong Q., Wang X., Ding A. Y., Xiao Y., and Hui P., Understanding the working time of developers in IT companies in China and the United States, IEEE Software (2021) 38, no. 2, 96–106, https://doi.org/10.1109/MS.2020.2988022.
  • 26 Khomh F., Adams B., Dhaliwal T., and Zou Y., Understanding the impact of rapid releases on software quality, Empirical Software Engineering (2015) 20, no. 2, 336–373, https://doi.org/10.1007/s10664-014-9308-x.
  • 27 Gong Q., Zhang J., Chen Y., Li Q., Xiao Y., Wang X., and Hui P., Detecting malicious accounts in online developer communities using deep learning, Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 2019, 1251–1260.
  • 28 Gong Q., Liu Y., Zhang J., Chen Y., Li Q., Xiao Y., Wang X., and Hui P., Detecting malicious accounts in online developer communities using deep learning, IEEE Transactions on Knowledge and Data Engineering (2023) 35, no. 10, 10633–10649, https://doi.org/10.1109/TKDE.2023.3237838.
  • 29 Goyal R., Ferreira G., Kästner C., and Herbsleb J., Identifying unusual commits on GitHub, Journal of Software: Evolution and Process (2018) 30, no. 1, e1893, https://doi.org/10.1002/smr.1893.
  • 30 MacQueen J., Classification and analysis of multivariate observations, Fifth Berkeley Symposium on Mathematical Statistics and Probability, 1967, Los Angeles, LA, USA, University of California, 281–297.
  • 31 Ester M., Kriegel H.-P., Sander J., and Xu X., A density-based algorithm for discovering clusters in large spatial databases with noise, 2nd International Conference on Knowledge Discovery and Data Mining, 1996, ACM, 226–231.
  • 32 Cheng Z., Trépanier M., and Sun L., Probabilistic model for destination inference and travel pattern mining from smart card data, Transportation (2021) 48, no. 4, 2035–2053, https://doi.org/10.1007/s11116-020-10120-0.
  • 33 Li Z., Yan H., Zhang C., and Tsung F., Individualized passenger travel pattern multi-clustering based on graph regularized tensor latent dirichlet allocation, Data Mining and Knowledge Discovery (2022) 36, no. 4, 1247–1278, https://doi.org/10.1007/s10618-022-00842-3.
  • 34 Pedregosa F., Varoquaux G., Gramfort A., Michel V., Thirion B., Grisel O., Blondel M., Prettenhofer P., Weiss R., Dubourg V., Vanderplas J., Passos A., Cournapeau D., Brucher M., Perrot M., and Duchesnay É., Scikit-learn: machine learning in Python, Journal of Machine Learning Research (2011) 12, 2825–2830.
  • 35 Vasilescu B., Yu Y., Wang H., Devanbu P., and Filkov V., Quality and productivity outcomes relating to continuous integration in GitHub, Proceedings of the 2015 10th Joint Meeting on Foundations of Software Engineering, 2015, ACM, 805–816.
  • 36 Burt R. S., Kilduff M., and Tasselli S., Social network analysis: foundations and frontiers on advantage, Annual Review of Psychology (2013) 64, no. 1, 527–547, https://doi.org/10.1146/annurev-psych-113011-143828.
  • 37 Lin Z., Zhang Y., Gong Q., Chen Y., Oksanen A., and Ding A. Y., Structural hole theory in social network analysis: a review, IEEE Transactions on Computational Social Systems (2022) 9, no. 3, 724–739, https://doi.org/10.1109/TCSS.2021.3070321.
  • 38 Li W., Xu Z., Sun Y., Gong Q., Chen Y., Ding A. Y., Wang X., and Hui P., DeepPick: a deep learning approach to unveil outstanding users with public attainable features, IEEE Transactions on Knowledge and Data Engineering (2023) 35, no. 1, 291–306.
  • 39 Bhowmik T., Niu N., Singhania P., and Wang W., On the role of structural holes in requirements identification: an exploratory study on open-source software development, ACM Transactions on Management Information Systems (2015) 6, no. 3, 1–30, https://doi.org/10.1145/2795235.
  • 40 Xu W., Rezvani M., Liang W., Yu J. X., and Liu C., Efficient algorithms for the identification of top-k structural hole spanners in large social networks, IEEE Transactions on Knowledge and Data Engineering (2017) 29, no. 5, 1017–1030, https://doi.org/10.1109/TKDE.2017.2651825.
  • 41 Gong Q., Zhang J., Wang X., and Chen Y., Identifying structural hole spanners in online social networks using machine learning, Proceedings of the ACM SIGCOMM 2019 Conference Posters and Demos, 2019, 93–95.
  • 42 Gao M., Li Z., Li R., Cui C., Chen X., Ye B., Li Y., Gu W., Gong Q., Wang X., and Chen Y., EasyGraph: a multifunctional, cross-platform, and effective library for interdisciplinary network analysis, Patterns (2023) 4, no. 10, 100839, https://doi.org/10.1016/j.patter.2023.100839.
  • 43 Tsay J., Dabbish L., and Herbsleb J., Influence of social and technical factors for evaluating contribution in GitHub, Proceedings of the 36th International Conference on Software Engineering, 2014, ACM, 356–366.
  • 44 Dabbish L., Stuart C., Tsay J., and Herbsleb J., Social coding in GitHub: transparency and collaboration in an open software repository, Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, 2012, ACM, 1277–1286.
  • 45 Hirsch J. E., An index to quantify an individual’s scientific research output, Proceedings of the National Academy of Sciences of the United States of America (2005) 102, no. 46, 16569–16572, https://doi.org/10.1073/pnas.0507655102.
  • 46 Gong Q., Chen Y., He X., Xiao Y., Hui P., Wang X., and Fu X., Cross-site prediction on social influence for cold-start users in online social networks, ACM Transactions on the Web (2021) 15, no. 2, 1–23, https://doi.org/10.1145/3409108.
  • 47 Borges H., Hora A., and Valente M. T., Understanding the factors that impact the popularity of GitHub repositories, 2016 IEEE International Conference on Software Maintenance and Evolution, 2016, IEEE, 334–344.
  • 48 Jiang J., Lo D., He J., Xia X., Kochhar P. S., and Zhang L., Why and how developers fork what from whom in GitHub, Empirical Software Engineering (2017) 22, no. 1, 547–578, https://doi.org/10.1007/s10664-016-9436-6.
  • 49 Tsay J., Dabbish L., and Herbsleb J., Let’s talk about it: evaluating contributions through discussion in GitHub, Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, 2014, ACM.
  • 50 Vale G., Schmid A., Santos A. R., De Almeida E. S., and Apel S., On the relation between GitHub communication activity and merge conflicts, Empirical Software Engineering (2020) 25, no. 1, 402–433, https://doi.org/10.1007/s10664-019-09774-x.
  • 51 Vasilescu B., Blincoe K., Xuan Q., Casalnuovo C., Damian D., Devanbu P., and Filkov V., The sky is not the limit: multitasking across GitHub projects, Proceedings of the 38th International Conference on Software Engineering, 2016, IEEE, 994–1005.
  • 52 Dieste O., Aranda A. M., Uyaguari F., Turhan B., Tosun A., Fucci D., Oivo M., and Juristo N., Empirical evaluation of the effects of experience on code quality and programmer productivity: an exploratory study, Empirical Software Engineering (2017) 22, no. 5, 2457–2542, https://doi.org/10.1007/s10664-016-9471-3.
  • 53 Oliveira E., Fernandes E., Steinmacher I., Cristo M., Conte T., and Garcia A., Code and commit metrics of developer productivity: a study on team leaders perceptions, Empirical Software Engineering (2020) 25, no. 4, 2519–2549, https://doi.org/10.1007/s10664-020-09820-z.
  • 54 Pearson K., X. On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science (2009) 50, no. 302, 157–175, https://doi.org/10.1080/14786440009463897.
  • 55 Mann H. B. and Whitney D. R. , On a test of whether one of two random variables is stochastically larger than the other , The Annals of Mathematical Statistics . ( 1947 ) 18 , no. 1, 50 – 60 . 10.1214/aoms/1177730491 Web of Science® Google Scholar
  • 56 Meyer A. N. , Fritz T. , Murphy G. C. , and Zimmermann T. , Software developers’ perceptions of productivity , Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering , 2014, ACM, 19 – 29 . Google Scholar
  • 57 Beckers D. G. J. , van der Linden D. , Smulders P. G. W. , Kompier M. A. J. , van Veldhoven M. J. P. M. , and van Yperen N. W. , Working overtime hours: relations with fatigue, work motivation, and the quality of work , Journal of Occupational and Environmental Medicine . ( 2004 ) 46 , no. 12, 1282 – 1289 . PubMed Web of Science® Google Scholar
  • 58 Wu J. , Ye B. , Gong Q. , Oksanen A. , Li C. , Qu J. , Tian F. F. , Li X. , and Chen Y. , Characterizing and understanding development of social computing through DBLP: a data-driven analysis , Journal of Social Computing . ( 2022 ) 3 , no. 4, 287 – 302 , https://doi.org/10.23919/JSC.2022.0018 . 10.23919/JSC.2022.0018 Google Scholar

Global Academic Reputation Survey 2024 launching soon

Survey results will help to fuel THE’s series of rankings in 2024.


The world’s largest invitation-only academic opinion survey, Times Higher Education’s global Academic Reputation Survey, is being launched in early November.

The Academic Reputation Survey 2024, available in 12 languages, will be distributed by THE and use benchmark data as a guide to ensure that the response coverage is as representative of world scholarship as possible. 

The annual questionnaire targets only experienced, published scholars, who offer their views on excellence in research and teaching within their disciplines and at institutions with which they are familiar. Invitations will be spread across a three-month period. 

The survey results will help to fuel THE’s series of rankings in 2024.

Scholars are asked to use their discipline-specific knowledge to name up to 15 universities that they believe are the best in both research and teaching. The survey, which typically takes up to 15 minutes to complete, will close at the end of January 2024. 

The headline results of the survey will be shared with respondents. More detailed analysis will then be published in THE’s World Reputation Rankings 2024. The survey data from 2023 and 2024 will also be used alongside 15 objective indicators to help create the THE World University Rankings 2025, to be published in late 2024, and all WUR subsidiary rankings.

The survey also provides a uniquely rich picture of the changing global academic reputation of institutions to inform THE’s editorial analyses and data and analytics tools.

The survey is strictly invitation-only; universities cannot make nominations or supply contact lists, and individuals cannot nominate themselves for participation. 

Please check your inbox for an invitation from [email protected] .

If you are selected to take part in the survey, you have been chosen based on a proven record of research publication and will be representing thousands of your peers in your discipline and your country. Please take the opportunity to provide your expert input and help us develop a uniquely rich perspective on global higher education.

Read the World Reputation Rankings 2023 methodology



How to Write a Literature Review | Guide, Examples, & Templates

Published on January 2, 2023 by Shona McCombes. Revised on September 11, 2023.

What is a literature review? A literature review is a survey of scholarly sources on a specific topic. It provides an overview of current knowledge, allowing you to identify relevant theories, methods, and gaps in the existing research that you can later apply to your paper, thesis, or dissertation topic.

There are five key steps to writing a literature review:

  • Search for relevant literature
  • Evaluate sources
  • Identify themes, debates, and gaps
  • Outline the structure
  • Write your literature review

A good literature review doesn’t just summarize sources—it analyzes, synthesizes, and critically evaluates to give a clear picture of the state of knowledge on the subject.


Table of contents

  • What is the purpose of a literature review?
  • Examples of literature reviews
  • Step 1 – Search for relevant literature
  • Step 2 – Evaluate and select sources
  • Step 3 – Identify themes, debates, and gaps
  • Step 4 – Outline your literature review’s structure
  • Step 5 – Write your literature review
  • Free lecture slides
  • Other interesting articles
  • Frequently asked questions

When you write a thesis, dissertation, or research paper, you will likely have to conduct a literature review to situate your research within existing knowledge. The literature review gives you a chance to:

  • Demonstrate your familiarity with the topic and its scholarly context
  • Develop a theoretical framework and methodology for your research
  • Position your work in relation to other researchers and theorists
  • Show how your research addresses a gap or contributes to a debate
  • Evaluate the current state of research and demonstrate your knowledge of the scholarly debates around your topic.

Writing literature reviews is a particularly important skill if you want to apply for graduate school or pursue a career in research. We’ve written a step-by-step guide that you can follow below.


Writing literature reviews can be quite challenging! A good starting point could be to look at some examples, depending on what kind of literature review you’d like to write.

  • Example literature review #1: “Why Do People Migrate? A Review of the Theoretical Literature” ( Theoretical literature review about the development of economic migration theory from the 1950s to today.)
  • Example literature review #2: “Literature review as a research methodology: An overview and guidelines” ( Methodological literature review about interdisciplinary knowledge acquisition and production.)
  • Example literature review #3: “The Use of Technology in English Language Learning: A Literature Review” ( Thematic literature review about the effects of technology on language acquisition.)
  • Example literature review #4: “Learners’ Listening Comprehension Difficulties in English Language Learning: A Literature Review” ( Chronological literature review about how the concept of listening skills has changed over time.)

You can also check out our templates with literature review examples and sample outlines at the links below.

Download Word doc Download Google doc

Before you begin searching for literature, you need a clearly defined topic.

If you are writing the literature review section of a dissertation or research paper, you will search for literature related to your research problem and questions.

Make a list of keywords

Start by creating a list of keywords related to your research question. Include each of the key concepts or variables you’re interested in, and list any synonyms and related terms. You can add to this list as you discover new keywords in the process of your literature search.

For example, for a research question about the effects of social media on body image, your keywords might include:

  • Social media, Facebook, Instagram, Twitter, Snapchat, TikTok
  • Body image, self-perception, self-esteem, mental health
  • Generation Z, teenagers, adolescents, youth

Search for relevant sources

Use your keywords to begin searching for sources. Some useful databases to search for journals and articles include:

  • Your university’s library catalogue
  • Google Scholar
  • Project Muse (humanities and social sciences)
  • Medline (life sciences and biomedicine)
  • EconLit (economics)
  • Inspec (physics, engineering and computer science)

You can also use boolean operators to help narrow down your search.
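For instance, the keyword groups listed earlier could be combined into a single query. This is a hypothetical example, and the exact operator and truncation syntax varies between databases:

```
("social media" OR Instagram OR TikTok)
AND ("body image" OR "self-esteem" OR self-perception)
AND (adolescen* OR teenager* OR "Generation Z")
```

Quoted phrases keep multi-word terms together, OR widens each concept group to include synonyms, AND requires that all three concepts appear, and the asterisk truncation matches variants such as adolescent and adolescence.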

Make sure to read the abstract to find out whether an article is relevant to your question. When you find a useful book or article, you can check the bibliography to find other relevant sources.

You likely won’t be able to read absolutely everything that has been written on your topic, so it will be necessary to evaluate which sources are most relevant to your research question.

For each publication, ask yourself:

  • What question or problem is the author addressing?
  • What are the key concepts and how are they defined?
  • What are the key theories, models, and methods?
  • Does the research use established frameworks or take an innovative approach?
  • What are the results and conclusions of the study?
  • How does the publication relate to other literature in the field? Does it confirm, add to, or challenge established knowledge?
  • What are the strengths and weaknesses of the research?

Make sure the sources you use are credible, and make sure you read any landmark studies and major theories in your field of research.

You can use our template to summarize and evaluate sources you’re thinking about using. Click on either button below to download.

Take notes and cite your sources

As you read, you should also begin the writing process. Take notes that you can later incorporate into the text of your literature review.

It is important to keep track of your sources with citations to avoid plagiarism. It can be helpful to make an annotated bibliography, where you compile full citation information and write a paragraph of summary and analysis for each source. This helps you remember what you read and saves time later in the process.


To begin organizing your literature review’s argument and structure, be sure you understand the connections and relationships between the sources you’ve read. Based on your reading and notes, you can look for:

  • Trends and patterns (in theory, method or results): do certain approaches become more or less popular over time?
  • Themes: what questions or concepts recur across the literature?
  • Debates, conflicts and contradictions: where do sources disagree?
  • Pivotal publications: are there any influential theories or studies that changed the direction of the field?
  • Gaps: what is missing from the literature? Are there weaknesses that need to be addressed?

This step will help you work out the structure of your literature review and (if applicable) show how your own research will contribute to existing knowledge.

  • Most research has focused on young women.
  • There is an increasing interest in the visual aspects of social media.
  • But there is still a lack of robust research on highly visual platforms like Instagram and Snapchat—this is a gap that you could address in your own research.

There are various approaches to organizing the body of a literature review. Depending on the length of your literature review, you can combine several of these strategies (for example, your overall structure might be thematic, but each theme is discussed chronologically).

Chronological

The simplest approach is to trace the development of the topic over time. However, if you choose this strategy, be careful to avoid simply listing and summarizing sources in order.

Try to analyze patterns, turning points and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred.

Thematic

If you have found some recurring central themes, you can organize your literature review into subsections that address different aspects of the topic.

For example, if you are reviewing literature about inequalities in migrant health outcomes, key themes might include healthcare policy, language barriers, cultural attitudes, legal status, and economic access.

Methodological

If you draw your sources from different disciplines or fields that use a variety of research methods, you might want to compare the results and conclusions that emerge from different approaches. For example:

  • Look at what results have emerged in qualitative versus quantitative research
  • Discuss how the topic has been approached by empirical versus theoretical scholarship
  • Divide the literature into sociological, historical, and cultural sources

Theoretical

A literature review is often the foundation for a theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts.

You might argue for the relevance of a specific theoretical approach, or combine various theoretical concepts to create a framework for your research.

Like any other academic text, your literature review should have an introduction, a main body, and a conclusion. What you include in each depends on the objective of your literature review.

The introduction should clearly establish the focus and purpose of the literature review.

Depending on the length of your literature review, you might want to divide the body into subsections. You can use a subheading for each theme, time period, or methodological approach.

As you write, you can follow these tips:

  • Summarize and synthesize: give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: don’t just paraphrase other researchers — add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: use transition words and topic sentences to draw connections, comparisons and contrasts

In the conclusion, you should summarize the key findings you have taken from the literature and emphasize their significance.

When you’ve finished writing and revising your literature review, don’t forget to proofread thoroughly before submitting. Not a language expert? Check out Scribbr’s professional proofreading services!

This article has been adapted into lecture slides that you can use to teach your students about writing a literature review.

Scribbr slides are free to use, customize, and distribute for educational purposes.

Open Google Slides Download PowerPoint

If you want to know more about the research process , methodology , research bias , or statistics , make sure to check out some of our other articles with explanations and examples.

Methodology

  • Sampling methods
  • Simple random sampling
  • Stratified sampling
  • Cluster sampling
  • Likert scales
  • Reproducibility

Statistics

  • Null hypothesis
  • Statistical power
  • Probability distribution
  • Effect size
  • Poisson distribution

Research bias

  • Optimism bias
  • Cognitive bias
  • Implicit bias
  • Hawthorne effect
  • Anchoring bias
  • Explicit bias

A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question.

It is often written as part of a thesis, dissertation, or research paper, in order to situate your work in relation to existing knowledge.

There are several reasons to conduct a literature review at the beginning of a research project:

  • To familiarize yourself with the current state of knowledge on your topic
  • To ensure that you’re not just repeating what others have already done
  • To identify gaps in knowledge and unresolved problems that your research can address
  • To develop your theoretical framework and methodology
  • To provide an overview of the key findings and debates on the topic

Writing the literature review shows your reader how your work relates to existing research and what new insights it will contribute.

The literature review usually comes near the beginning of your thesis or dissertation. After the introduction, it grounds your research in a scholarly field and leads directly to your theoretical framework or methodology.

A literature review is a survey of credible sources on a topic, often used in dissertations, theses, and research papers. Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts, with an introduction, a main body, and a conclusion.

An annotated bibliography is a list of source references that has a short description (called an annotation) for each of the sources. It is often assigned as part of the research process for a paper.


COMMENTS

  1. How to Create a Survey Results Report (+7 Examples to Steal)

    How to Write a Survey Results Report. Let's walk through some tricks and techniques with real examples. 1. Use Data Visualization. The most important thing about a survey report is that it allows readers to make sense of data. Visualizations are a key component of any survey summary.

  2. How to Write a Results Section

    Checklist: Research results 0 / 7. I have completed my data collection and analyzed the results. I have included all results that are relevant to my research questions. I have concisely and objectively reported each result, including relevant descriptive statistics and inferential statistics. I have stated whether each hypothesis was supported ...

  3. Reporting Research Results in APA Style

    Reporting Research Results in APA Style | Tips & Examples. Published on December 21, 2020 by Pritha Bhandari.Revised on January 17, 2024. The results section of a quantitative research paper is where you summarize your data and report the findings of any relevant statistical analyses.. The APA manual provides rigorous guidelines for what to report in quantitative research papers in the fields ...

  4. Survey Results: How To Analyze Data and Report on Findings

    How quantilope streamlines the analysis and presentation of survey results quantilope's automated Consumer Intelligence Platform saves clients from the tedious, manual processes of traditional market research , offering an end-to-end resource for questionnaire setup, real-time fielding, automated charting, and AI-assisted reporting.

  5. PDF Results Section for Research Papers

    The results section of a research paper tells the reader what you found, while the discussion section tells the reader what your findings mean. The results section should present the facts in an academic and unbiased manner, avoiding any attempt at analyzing or interpreting the data. Think of the results section as setting the stage for the ...

  6. Research Results Section

    Research results refer to the findings and conclusions derived from a systematic investigation or study conducted to answer a specific question or hypothesis. These results are typically presented in a written report or paper and can include various forms of data such as numerical data, qualitative data, statistics, charts, graphs, and visual aids.

  7. How to Write a Survey Report (with Pictures)

    If you don't need a specific style, make sure that the formatting for the paper is consistent throughout. Use the same spacing, font, font size, and citations throughout the paper. 3. Adopt a clear, objective voice throughout the paper. Remember that your job is to report the results of the survey.

  8. How to Write the Results Section of a Research Paper

    Build coherence along this section using goal statements and explicit reasoning (guide the reader through your reasoning, including sentences of this type: 'In order to…, we performed….'; 'In view of this result, we ….', etc.). In summary, the general steps for writing the Results section of a research article are:

  9. How to Write the Results Section: Guide to Structure and Key ...

    The ' Results' section of a research paper, like the 'Introduction' and other key parts, attracts significant attention from editors, reviewers, and readers. The reason lies in its critical role — that of revealing the key findings of a study and demonstrating how your research fills a knowledge gap in your field of study. Given its importance, crafting a clear and logically ...

  10. 7. The Results

    For most research papers in the social and behavioral sciences, there are two possible ways of organizing the results. Both approaches are appropriate in how you report your findings, but use only one approach. Present a synopsis of the results followed by an explanation of key findings. This approach can be used to highlight important findings.

  11. Reporting Survey Based Studies

    Abstract. The coronavirus disease 2019 (COVID-19) pandemic has led to a massive rise in survey-based research. The paucity of perspicuous guidelines for conducting surveys may pose a challenge to the conduct of ethical, valid and meticulous research. The aim of this paper is to guide authors aiming to publish in scholarly journals regarding the ...

  12. Doing Survey Research

    Survey research means collecting information about a group of people by asking them questions and analysing the results. To conduct an effective survey, follow these six steps: Determine who will participate in the survey. Decide the type of survey (mail, online, or in-person) Design the survey questions and layout. Distribute the survey.

  13. Survey Research

    Survey research means collecting information about a group of people by asking them questions and analyzing the results. To conduct an effective survey, follow these six steps: Determine who will participate in the survey. Decide the type of survey (mail, online, or in-person) Design the survey questions and layout.

  14. How to analyze survey data: Methods & examples

    How to analyze survey data. Learn how SurveyMonkey can help you analyze your survey data effectively, as well as create better surveys with ease. The results are back from your online surveys. Now it's time to tap the power of survey data analysis to make sense of the results and present them in ways that are easy to understand and act on ...

  15. How to Build a Survey Results Report

    It states the hypothesis, question, or issues at hand for why the research was conducted and how the results plan to be used. 5. Survey Method. This section reviews who the target audience was and who the survey actually included. It also reviews how surveyors contacted respondents and the process of data collection. This is often a more ...

  16. How to Analyze and Present Survey Results

    1. Create a Presentation. While many times you'll put together a document, one-pager or infographic to visualize survey results, sometimes a presentation is the perfect format. Create a survey presentation like the one below to share your findings with your team. 1 / 8.

  17. Survey Analysis in 2023: How to Analyze Results [3 Examples]

    Below we give just a few examples of types of software you could use to analyze survey data. Of course, these are just a few examples to illustrate the types of functions you could employ. 1. Thematic software. As an example, with Thematic's software solution you can identify trends in sentiment and particular themes.

  18. How to Frame and Explain the Survey Data Used in a Thesis

    Surveys are a special research tool with strengths, weaknesses, and a language all of their own. There are many different steps to designing and conducting a survey, and survey researchers have specific ways of describing what they do.This handout, based on an annual workshop offered by the Program on Survey Research at Harvard, is geared toward undergraduate honors thesis writers using survey ...

  19. How to Analyze Survey Results Like a Data Pro

    5. Include the methodology of your research. The methodology section of your report should explain exactly how your survey was conducted, who was invited to participate, and the types of tests used to analyze the data. You might use charts or graphs to help communicate this data.

  20. How to Present Survey Results Using Infographics

    Take the five- and ten-point Likert and NPS scales and summarize them into simpler three-point scales ("disagree", "neutral", and "agree" or "positive", "neutral", and "negative"). Source. Presenting survey results in a simplified categories goes a long way in making the chart easier to read. 3. Demographic results.

  21. 5 Examples of How to Present Survey Results to Stakeholders

    Here are five common ways to present your survey results to businesses, stakeholders, and customers. 1. Graphs and Charts. Graphs and charts summarize survey results in a quick, easy graphic for people to understand. Some of the most common types of graphs include: Bar graphs are the most popular way to display results.

  26. Questionnaire Design

    Questionnaires vs. surveys. A survey is a research method where you collect and analyze data from a group of people; a questionnaire is the specific tool or instrument used to collect that data. Designing a questionnaire means creating valid and reliable questions that address your research objectives, placing them in a useful order, and selecting an appropriate method for administration.
