
Understanding Data Presentations (Guide + Examples)


In this age of overwhelming information, the ability to convey data effectively has become extremely valuable. Choosing among data presentation types starts with thoughtful consideration of the nature of your data and the message you aim to convey, since different types of visualizations serve distinct purposes. Whether you are developing a report or simply trying to communicate complex information, how you present data influences how well your audience understands and engages with it. This extensive guide walks you through the different ways of presenting data.

Table of Contents

  • What is a Data Presentation?
  • What Should a Data Presentation Include?
  • Bar Charts
  • Line Graphs
  • Data Dashboards
  • Treemap Charts
  • Heatmaps
  • Pie Charts
  • Histograms
  • Scatter Plots
  • How to Choose a Data Presentation Type
  • Recommended Data Presentation Templates
  • Common Mistakes Done in Data Presentation

What is a Data Presentation?

A data presentation is a slide deck that aims to communicate quantitative information to an audience through visual formats and narrative techniques derived from data analysis, making complex data understandable and actionable. The process relies on a series of tools, such as charts, graphs, tables, infographics, and dashboards, supported by concise textual explanations that improve understanding and boost retention.

Data presentations require us to distill data into a format that allows the presenter to highlight trends, patterns, and insights so that the audience can act upon the shared information. In short, the goal of a data presentation is to enable viewers to grasp complicated concepts or trends quickly, facilitating informed decision-making or deeper analysis.

Data presentations go beyond the mere use of graphical elements. Seasoned presenters pair visuals with the art of data storytelling, so the speech skillfully connects the points through a narrative that resonates with the audience. The purpose – to inspire, persuade, inform, support decision-making, etc. – determines which data presentation format is best suited to the journey.

What Should a Data Presentation Include?

To nail your upcoming data presentation, make sure it includes the following elements:

  • Clear Objectives: Understand the intent of your presentation before selecting the graphical layout and metaphors to make content easier to grasp.
  • Engaging Introduction: Use a powerful hook from the get-go. For instance, you can ask a big question or present a problem that your data will answer. Take a look at our guide on how to start a presentation for tips & insights.
  • Structured Narrative: Your data presentation must tell a coherent story. This means a beginning where you present the context, a middle section in which you present the data, and an ending that uses a call-to-action. Check our guide on presentation structure for further information.
  • Visual Elements: These are the charts, graphs, and other elements of visual communication we ought to use to present data. This article will cover one by one the different types of data representation methods we can use, and provide further guidance on choosing between them.
  • Insights and Analysis: This is not just showcasing a graph and letting people get an idea about it. A proper data presentation includes the interpretation of that data, the reason why it’s included, and why it matters to your research.
  • Conclusion & CTA: Ending your presentation with a call to action is necessary. Whether you intend to wow your audience into acquiring your services, inspire them to change the world, or pursue whatever purpose your presentation serves, there must be a stage in which you summarize what you shared and show the path to staying in touch. Plan ahead whether a thank-you slide, a video presentation, or another method is apt and tailored to the kind of presentation you deliver.
  • Q&A Session: After your speech is concluded, allocate 3-5 minutes for the audience to raise any questions about the information you disclosed. This is an extra chance to establish your authority on the topic. Check our guide on questions and answer sessions in presentations here.

Bar Charts

Bar charts are a graphical representation of data using rectangular bars to show quantities or frequencies within an established category. They make it easy for readers to spot patterns or trends. Bar charts can be horizontal or vertical, although the vertical format is commonly known as a column chart. They display categorical, discrete, or continuous variables grouped in class intervals [1]. They include an axis and a set of labeled bars arranged horizontally or vertically. These bars represent the frequencies of variable values or the values themselves. The numbers on the y-axis of a vertical bar chart or the x-axis of a horizontal bar chart are called the scale.

Presentation of the data through bar charts

Real-Life Application of Bar Charts

Let’s say a sales manager is presenting sales results to their audience. Using a bar chart, they follow these steps.

Step 1: Selecting Data

The first step is to identify the specific data you will present to your audience.

The sales manager has highlighted these products for the presentation.

  • Product A: Men’s Shoes
  • Product B: Women’s Apparel
  • Product C: Electronics
  • Product D: Home Decor

Step 2: Choosing Orientation

Opt for a vertical layout for simplicity. Vertical bar charts help compare different categories when there are not too many of them [1]. They can also help show trends. Here, a vertical bar chart is used in which each bar represents one of the four chosen products. After plotting the data, the height of each bar directly represents the sales performance of the respective product.

The tallest bar (Electronics – Product C) shows the highest sales, while the shorter bars (Women’s Apparel – Product B and Home Decor – Product D) need attention, indicating areas that require further analysis or strategies for improvement.

Step 3: Colorful Insights

Different colors are used to differentiate each product. A color-coded chart is essential so the audience can distinguish between products at a glance; a minimal code sketch of the finished chart follows the list below.

  • Men’s Shoes (Product A): Yellow
  • Women’s Apparel (Product B): Orange
  • Electronics (Product C): Violet
  • Home Decor (Product D): Blue
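
For readers who want to reproduce this kind of chart programmatically, here is a minimal matplotlib sketch. The sales figures are hypothetical placeholders (the article does not list exact values); the colors follow the legend above.

```python
import matplotlib.pyplot as plt

# Hypothetical sales figures; the source does not provide exact values.
products = ["Men's Shoes", "Women's Apparel", "Electronics", "Home Decor"]
sales = [52000, 34000, 78000, 29000]             # placeholder amounts in USD
colors = ["yellow", "orange", "violet", "blue"]  # color coding from the legend

fig, ax = plt.subplots()
ax.bar(products, sales, color=colors)
ax.set_ylabel("Sales (USD)")
ax.set_title("Sales by Product")
plt.show()
```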

Accurate bar chart representation of data with a color coded legend

Bar charts are straightforward and easy to understand when presenting data. They are versatile for comparing products or any categorical data [2], and they adapt seamlessly to retail scenarios. Despite that, bar charts have a few shortcomings: they cannot illustrate data trends over time, and overloading the chart with numerous products can lead to visual clutter, diminishing its effectiveness.

For more information, check our collection of bar chart templates for PowerPoint.

Line Graphs

Line graphs help illustrate data trends, progressions, or fluctuations by connecting a series of data points called ‘markers’ with straight line segments. This provides a straightforward representation of how values change [5]. Their versatility makes them invaluable for scenarios requiring a visual understanding of continuous data, and plotting multiple lines over the same timeline lets us compare several datasets at once. Line graphs simplify complex information so the audience can quickly grasp the ups and downs of values. From tracking stock prices to analyzing experimental results, you can use line graphs to show how data changes over a continuous timeline, with simplicity and clarity.

Real-life Application of Line Graphs

To understand line graphs thoroughly, we will use a real case. Imagine you’re a financial analyst presenting a tech company’s monthly sales for a licensed product over the past year. Investors want insights into sales behavior by month, how market trends may have influenced sales performance and reception to the new pricing strategy. To present data via a line graph, you will complete these steps.

Step 1: Gathering the Data

First, you need to gather the data. In this case, your data will be the sales numbers. For example:

  • January: $45,000
  • February: $55,000
  • March: $45,000
  • April: $60,000
  • May: $70,000
  • June: $65,000
  • July: $62,000
  • August: $68,000
  • September: $81,000
  • October: $76,000
  • November: $87,000
  • December: $91,000

Step 2: Selecting the Orientation

After choosing the data, the next step is to select the orientation. Like bar charts, line graphs can be vertical or horizontal. However, we want to keep this simple, so we will keep the timeline (x-axis) horizontal and the sales numbers (y-axis) vertical.

Step 3: Connecting Trends

After adding the data to your preferred software, you will plot a line graph. In the graph, each month’s sales are represented by data points connected by a line.
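
Since the article lists the exact monthly figures, a minimal matplotlib sketch of this step might look like the following.

```python
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
sales = [45000, 55000, 45000, 60000, 70000, 65000,
         62000, 68000, 81000, 76000, 87000, 91000]

fig, ax = plt.subplots()
ax.plot(months, sales, marker="o")  # markers show each month's data point
ax.set_ylabel("Monthly Sales (USD)")
ax.set_title("Licensed Product Sales Over the Past Year")
plt.show()
```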

Line graph in data presentation

Step 4: Adding Clarity with Color

If there are multiple lines, you can also add colors to highlight each one, making it easier to follow.

Line graphs excel at visually presenting trends over time, helping the audience identify patterns such as upward or downward movements. However, too many data points can clutter the graph, making it harder to interpret. Line graphs work best with continuous data and are not suitable for categorical data.

For more information, check our collection of line chart templates for PowerPoint and our article about how to make a presentation graph.

Data Dashboards

A data dashboard is a visual tool for analyzing information. Different graphs, charts, and tables are consolidated in a single layout to showcase the information required to achieve one or more objectives. Dashboards help users quickly see Key Performance Indicators (KPIs). You don’t create new visuals in the dashboard; instead, you use it to display visuals you’ve already made in worksheets [3].

Keeping the number of visuals on a dashboard to three or four is recommended; adding too many can make it hard to see the main points [4]. Dashboards can be used in business analytics to analyze sales, revenue, and marketing metrics at the same time. They are also used in the manufacturing industry, as they allow users to grasp the entire production scenario at a glance while tracking the core KPIs for each line.

Real-Life Application of a Dashboard

Consider a project manager presenting a software development project’s progress to a tech company’s leadership team. They follow these steps.

Step 1: Defining Key Metrics

To effectively communicate the project’s status, identify key metrics such as completion status, budget, and bug resolution rates. Then, choose measurable metrics aligned with project objectives.

Step 2: Choosing Visualization Widgets

After finalizing the data, presentation aids that align with each metric are selected. For this project, the manager chooses a progress bar for the completion status, bar charts for budget allocation, and line charts for bug resolution rates.

Data analysis presentation example

Step 3: Dashboard Layout

Key metrics are placed prominently in the dashboard for easy visibility, and the manager ensures that the layout appears clean and organized.
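
A dashboard like this is usually assembled in a BI tool, but as an illustration of the layout idea (one progress indicator plus two charts side by side), here is a small matplotlib sketch. All metric values are hypothetical placeholders.

```python
import matplotlib.pyplot as plt

# Hypothetical project metrics for illustration only.
completion = 0.68                            # 68% complete
budget = {"Dev": 120, "QA": 45, "Ops": 35}   # allocation in $k
weeks = [1, 2, 3, 4, 5]
bugs_resolved = [5, 9, 14, 22, 31]           # cumulative count

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 3))

# Progress bar for completion status
ax1.barh(["Progress"], [completion], color="seagreen")
ax1.set_xlim(0, 1)
ax1.set_title(f"Completion: {completion:.0%}")

# Bar chart for budget allocation
ax2.bar(list(budget.keys()), list(budget.values()), color="steelblue")
ax2.set_title("Budget ($k)")

# Line chart for bug resolution
ax3.plot(weeks, bugs_resolved, marker="o", color="indianred")
ax3.set_title("Bugs Resolved (cumulative)")
ax3.set_xlabel("Week")

fig.tight_layout()
plt.show()
```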

Dashboards provide a comprehensive view of key project metrics. Users can interact with data, customize views, and drill down for detailed analysis. However, creating an effective dashboard requires careful planning to avoid clutter. Besides, dashboards rely on the availability and accuracy of underlying data sources.

For more information, check our article on how to design a dashboard presentation, and discover our collection of dashboard PowerPoint templates.

Treemap Charts

Treemap charts represent hierarchical data structured in a series of nested rectangles [6]. As each branch of the ‘tree’ is given a rectangle, smaller tiles represent sub-branches, meaning elements on a lower hierarchical level than the parent rectangle. Each rectangular node covers an area proportional to the specified data dimension.

Treemaps are useful for visualizing large datasets in a compact space, making it easy to identify patterns such as which categories are dominant. Common applications of the treemap chart are seen in the IT industry, for resource allocation, disk space management, website analytics, and so on. They can also be used in many other fields, such as healthcare data analysis, market share across different product categories, or even finance, to visualize portfolios.

Real-Life Application of a Treemap Chart

Let’s consider a financial scenario where a financial team wants to represent the budget allocation of a company. There is a hierarchy in the process, so it is helpful to use a treemap chart. In the chart, the top-level rectangle could represent the total budget, and it would be subdivided into smaller rectangles, each denoting a specific department. Further subdivisions within these smaller rectangles might represent individual projects or cost categories.

Step 1: Define Your Data Hierarchy

While presenting data on the budget allocation, start by outlining the hierarchical structure. The sequence runs from the overall budget at the top, followed by departments, projects within each department, and finally, individual cost categories for each project; a code sketch of this hierarchy follows the list.

  • Top-level rectangle: Total Budget
  • Second-level rectangles: Departments (Engineering, Marketing, Sales)
  • Third-level rectangles: Projects within each department
  • Fourth-level rectangles: Cost categories for each project (Personnel, Marketing Expenses, Equipment)
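
Before moving to PowerPoint, note that the same idea can be sketched in code. The example below uses squarify, a small third-party Python helper for treemaps, with hypothetical budget figures for the department level only; real figures and deeper levels would follow the same pattern.

```python
import matplotlib.pyplot as plt
import squarify  # third-party treemap helper: pip install squarify

# Hypothetical department-level budget split (in $k); deeper levels omitted.
labels = ["Engineering\n$500k", "Marketing\n$300k", "Sales\n$200k"]
sizes = [500, 300, 200]

squarify.plot(sizes=sizes, label=labels, alpha=0.8)
plt.axis("off")
plt.title("Total Budget: $1,000k")
plt.show()
```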

Step 2: Choose a Suitable Tool

It’s time to select a data visualization tool supporting Treemaps. Popular choices include Tableau, Microsoft Power BI, PowerPoint, or even coding with libraries like D3.js. It is vital to ensure that the chosen tool provides customization options for colors, labels, and hierarchical structures.

Here, the team uses PowerPoint for this guide because of its user-friendly interface and robust Treemap capabilities.

Step 3: Make a Treemap Chart with PowerPoint

After opening the PowerPoint presentation, the team chooses “SmartArt” to form the chart. The SmartArt Graphic window has a “Hierarchy” category on the left, where you will see multiple options. You can choose any layout that resembles a treemap; the “Table Hierarchy” or “Organization Chart” options can be adapted. The team selects Table Hierarchy, as it looks closest to a treemap.

Step 4: Input Your Data

After that, a new window opens with a basic structure. The team adds the data one item at a time by clicking on the text boxes, starting with the top-level rectangle that represents the total budget.

Treemap used for presenting data

Step 5: Customize the Treemap

By clicking on each shape, they customize its color, size, and label. They can also adjust the font size, style, and color of labels using the options in the “Format” tab in PowerPoint. Using different colors for each level enhances the visual distinction.

Treemaps excel at illustrating hierarchical structures. These charts make it easy to understand relationships and dependencies. They efficiently use space, compactly displaying a large amount of data, reducing the need for excessive scrolling or navigation. Additionally, using colors enhances the understanding of data by representing different variables or categories.

In some cases, treemaps might become complex, especially with deep hierarchies, making the chart challenging for some users to interpret. At the same time, displaying detailed information within each rectangle is constrained by space, potentially limiting the amount of data that can be shown clearly. Without proper labeling and color coding, there is also a risk of misinterpretation.

Heatmaps

A heatmap is a data visualization tool that uses color coding to represent values across a two-dimensional surface. Colors replace numbers to indicate the magnitude of each cell. This color-shaded matrix display is valuable for summarizing and understanding data sets at a glance [7]. The intensity of the color corresponds to the value it represents, making it easy to identify patterns, trends, and variations in the data.

As a tool, heatmaps help businesses analyze website interactions, revealing user behavior patterns and preferences to enhance overall user experience. In addition, companies use heatmaps to assess content engagement, identifying popular sections and areas of improvement for more effective communication. They excel at highlighting patterns and trends in large datasets, making it easy to identify areas of interest.

We can implement heatmaps to express multiple data types, such as numerical values, percentages, or even categorical data. Heatmaps make it easy to spot areas with lots of activity, which helps in identifying clusters [8]. When making these maps, it is important to pick colors carefully: the palette needs to show the differences between groups or levels, and it is good practice to use colors that people with color blindness can easily distinguish.
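
As a minimal sketch, a heatmap of hypothetical website activity can be drawn with matplotlib’s imshow, using a perceptually uniform, colorblind-friendly colormap such as viridis. The activity matrix below is randomly generated for illustration only.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical clicks per hour (rows: days, columns: hours), randomly generated.
rng = np.random.default_rng(0)
activity = rng.poisson(lam=20, size=(7, 24))

fig, ax = plt.subplots()
im = ax.imshow(activity, cmap="viridis")  # colorblind-friendly colormap
ax.set_xlabel("Hour of day")
ax.set_ylabel("Day of week")
fig.colorbar(im, ax=ax, label="Clicks")
plt.show()
```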

Check our detailed guide on how to create a heatmap here. Also discover our collection of heatmap PowerPoint templates.

Pie Charts

Pie charts are circular statistical graphics divided into slices to illustrate numerical proportions. Each slice represents a proportionate part of the whole, making it easy to visualize the contribution of each component to the total.

When several pies are shown together, the size of each pie reflects the total of its data points: the pie with the highest total appears largest, while the others are proportionally smaller. However, you can present all pies at the same size if proportional representation is not required [9]. Sometimes pie charts are difficult to read, or additional information is required. In those cases, a variation known as the donut chart can be used instead; it has the same structure but a blank center, creating a ring shape. Presenters can add extra information in the center, and the ring shape helps to declutter the graph.

Pie charts are used in business to show percentage distribution, compare relative sizes of categories, or present straightforward data sets where visualizing ratios is essential.

Real-Life Application of Pie Charts

Consider a scenario where you want to represent the distribution of the data. Each slice of the pie chart would represent a different category, and the size of each slice would indicate the percentage of the total portion allocated to that category.

Step 1: Define Your Data Structure

Imagine you are presenting the distribution of a project budget among different expense categories.

  • Column A: Expense Categories (Personnel, Equipment, Marketing, Miscellaneous)
  • Column B: Budget Amounts ($40,000, $30,000, $20,000, $10,000). Column B holds the values for the categories in Column A.

Step 2: Insert a Pie Chart

You can create a pie chart with any of the accessible tools; the most convenient for a presentation are tools such as PowerPoint or Google Slides. You will notice that the chart assigns each expense category a percentage of the total budget by dividing the category amount by the total budget.

For instance:

  • Personnel: $40,000 / ($40,000 + $30,000 + $20,000 + $10,000) = 40%
  • Equipment: $30,000 / ($40,000 + $30,000 + $20,000 + $10,000) = 30%
  • Marketing: $20,000 / ($40,000 + $30,000 + $20,000 + $10,000) = 20%
  • Miscellaneous: $10,000 / ($40,000 + $30,000 + $20,000 + $10,000) = 10%

You can make a chart from this table or generate the pie chart directly from the data; a minimal code sketch follows below.

Pie chart template in data presentation
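
In code, the same computation and chart take a few lines of matplotlib; the autopct option reproduces the percentages derived above.

```python
import matplotlib.pyplot as plt

categories = ["Personnel", "Equipment", "Marketing", "Miscellaneous"]
amounts = [40000, 30000, 20000, 10000]

fig, ax = plt.subplots()
ax.pie(amounts, labels=categories, autopct="%1.0f%%")  # shows 40%, 30%, 20%, 10%
ax.set_title("Project Budget Distribution")
plt.show()
```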

3D pie charts and 3D donut charts are quite popular with audiences. They stand out as visual elements in any presentation slide, so let’s take a look at how our pie chart example would look in 3D pie chart format.

3D pie chart in data presentation

Step 3: Results Interpretation

The pie chart visually illustrates the distribution of the project budget among the expense categories: personnel constitutes the largest portion at 40%, followed by equipment at 30%, marketing at 20%, and miscellaneous at 10%. This breakdown provides a clear overview of where the project funds are allocated, which helps in informed decision-making and resource management. It is evident that personnel is the most significant investment, emphasizing its importance in the overall project budget.

Pie charts provide a straightforward way to represent proportions and percentages. They are easy to understand, even for individuals with limited data analysis experience. These charts work well for small datasets with a limited number of categories.

However, a pie chart can become cluttered and less effective in situations with many categories. Accurate interpretation may be challenging, especially when dealing with slight differences in slice sizes. In addition, these charts are static and do not effectively convey trends over time.

For more information, check our collection of pie chart templates for PowerPoint.

Histograms

Histograms present the distribution of numerical variables. Unlike a bar chart, which records each unique response separately, histograms organize numeric responses into bins and show the frequency of responses within each bin [10]. The x-axis of a histogram shows the range of values for a numeric variable, while the y-axis indicates the relative frequencies (percentage of the total counts) for that range of values.

Whenever you want to understand the distribution of your data, check which values are more common, or identify outliers, histograms are your go-to. Think of them as a spotlight on the story your data is telling. A histogram can provide a quick and insightful overview if you’re curious about exam scores, sales figures, or any numerical data distribution.

Real-Life Application of a Histogram

As a histogram example in a data analysis presentation, imagine an instructor analyzing a class’s grades to identify the most common score range. A histogram effectively displays the distribution, showing whether most students scored in the average range or whether there are significant outliers.

Step 1: Gather Data

The instructor begins by gathering the data: the exam scores of each student in the class.

After arranging the scores in ascending order, bin ranges are set.

Step 2: Define Bins

Bins are like categories that group similar values. Think of them as buckets that organize your data. The presenter decides how wide each bin should be based on the range of the values. For instance, the instructor sets the bin ranges based on score intervals: 60-69, 70-79, 80-89, and 90-100.

Step 3: Count Frequency

Now, he counts how many data points fall into each bin. This step is crucial because it tells you how often specific ranges of values occur. The result is the frequency distribution, showing the occurrences of each group.

Here, the instructor counts the number of students in each category.

  • 60-69: 1 student (Kate)
  • 70-79: 4 students (David, Emma, Grace, Jack)
  • 80-89: 7 students (Alice, Bob, Frank, Isabel, Liam, Mia, Noah)
  • 90-100: 3 students (Clara, Henry, Olivia)

Step 4: Create the Histogram

It’s time to turn the data into a visual representation. Draw a bar for each bin on a graph: the width of the bar should correspond to the range of the bin, and the height should correspond to the frequency. To make your histogram understandable, label the X and Y axes.

In this case, the X-axis should represent the bins (e.g., test score ranges), and the Y-axis represents the frequency.
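
Here is a quick matplotlib sketch of this step. Since the article lists bin counts but not each student’s exact score, the scores below are assumed values chosen only to be consistent with those counts.

```python
import matplotlib.pyplot as plt

# Assumed scores consistent with the counts above (1, 4, 7, and 3 students).
scores = [65,                          # 60-69: Kate
          72, 75, 77, 79,              # 70-79: David, Emma, Grace, Jack
          80, 82, 83, 85, 86, 88, 89,  # 80-89: Alice, Bob, Frank, Isabel, Liam, Mia, Noah
          92, 95, 98]                  # 90-100: Clara, Henry, Olivia

fig, ax = plt.subplots()
ax.hist(scores, bins=[60, 70, 80, 90, 100], edgecolor="black")
ax.set_xlabel("Test score range")
ax.set_ylabel("Number of students")
plt.show()
```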

Histogram in Data Presentation

The histogram of the class grades reveals insightful patterns in the distribution. Most students (seven) fall within the 80-89 score range. The histogram provides a clear visualization of the class’s performance, showcasing a concentration of grades in the upper-middle range with few outliers at both ends. This analysis helps in understanding the overall academic standing of the class and identifies areas for potential improvement or recognition.

Thus, histograms provide a clear visual representation of data distribution. They are easy to interpret, even for those without a statistical background, and they apply to various types of data, including continuous and discrete variables. One weak point is that histograms do not capture detailed patterns in the underlying data as well as some other visualization methods, since binning hides individual values.

Scatter Plots

A scatter plot is a graphical representation of the relationship between two variables. It consists of individual data points on a two-dimensional plane, with one variable plotted on the x-axis and the other on the y-axis. Each point represents a unique observation, and the plot as a whole visualizes patterns, trends, or correlations between the two variables.

Scatter plots are also effective in revealing the strength and direction of relationships. They identify outliers and assess the overall distribution of data points. The points’ dispersion and clustering reflect the relationship’s nature, whether it is positive, negative, or lacks a discernible pattern. In business, scatter plots assess relationships between variables such as marketing cost and sales revenue. They help present data correlations and decision-making.

Real-Life Application of Scatter Plot

A group of scientists is conducting a study on the relationship between daily hours of screen time and sleep quality. After reviewing the data, they compiled it into a table and used it to build a scatter plot graph.

In the provided example, the x-axis represents Daily Hours of Screen Time, and the y-axis represents the Sleep Quality Rating.
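
The original table is not reproduced here, so the values in this matplotlib sketch are hypothetical, chosen only to show the kind of negative correlation the scientists describe.

```python
import matplotlib.pyplot as plt

# Hypothetical observations: daily screen time (hours) vs. sleep quality (1-10).
screen_time = [1, 2, 3, 4, 5, 6, 7, 8]
sleep_quality = [9, 8, 8, 6, 5, 5, 3, 2]

fig, ax = plt.subplots()
ax.scatter(screen_time, sleep_quality)
ax.set_xlabel("Daily Hours of Screen Time")
ax.set_ylabel("Sleep Quality Rating")
plt.show()
```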

Scatter plot in data presentation

The scientists observe a negative correlation between the amount of screen time and the quality of sleep. This is consistent with their hypothesis that blue light, especially before bedtime, has a significant impact on sleep quality and metabolic processes.

There are a few things to remember when using a scatter plot. Even when a scatter diagram indicates a relationship, it doesn’t mean one variable causes the other; a third factor can influence both variables. The more the plot resembles a straight line, the stronger the relationship is perceived to be [11]. If the plot suggests no ties, the observed pattern might be due to random fluctuations in the data. When the scatter diagram depicts no correlation, it is worth considering whether the data might be stratified.

How to Choose a Data Presentation Type

Choosing the appropriate data presentation type is crucial when making a presentation. Understanding the nature of your data and the message you intend to convey will guide this selection process. For instance, when showcasing quantitative relationships, scatter plots become instrumental in revealing correlations between variables. If the focus is on emphasizing parts of a whole, pie charts offer a concise display of proportions. Histograms, on the other hand, prove valuable for illustrating distributions and frequency patterns.

Bar charts provide a clear visual comparison of different categories. Likewise, line charts excel at showcasing trends over time, while tables are ideal for detailed data examination. Selecting the format that aligns with the specific information you want to communicate ensures clarity and resonance with your audience from the beginning of your presentation.

Recommended Data Presentation Templates

1. Fact Sheet Dashboard for Data Presentation


Convey all the data you need to present in this one-pager format, an ideal solution tailored for users looking for presentation aids. Global maps, donut charts, column graphs, and text are neatly arranged in a clean layout, available in light and dark themes.


2. 3D Column Chart Infographic PPT Template


Represent column charts in a highly visual 3D format with this PPT template. A creative way to present data, this template is entirely editable, and we can craft either a one-page infographic or a series of slides explaining what we intend to disclose point by point.

3. Data Circles Infographic PowerPoint Template


An alternative to the pie chart and donut chart diagrams, this template features a series of curved shapes with bubble callouts as ways of presenting data. Expand the information for each arch in the text placeholder areas.

4. Colorful Metrics Dashboard for Data Presentation


This versatile dashboard template helps us in the presentation of the data by offering several graphs and methods to convert numbers into graphics. Implement it for e-commerce projects, financial projections, project development, and more.

5. Animated Data Presentation Tools for PowerPoint & Google Slides


A slide deck filled with most of the tools mentioned in this article: bar charts, column charts, treemap graphs, pie charts, histograms, etc. Animated effects make each slide look dynamic when sharing data with stakeholders.

6. Statistics Waffle Charts PPT Template for Data Presentations


This PPT template shows how to present data beyond the typical pie chart representation. It is widely used for demographics, so it’s a great fit for marketing teams, data science professionals, HR personnel, and more.

7. Data Presentation Dashboard Template for Google Slides


A compendium of tools in dashboard format featuring line graphs, bar charts, column charts, and neatly arranged placeholder text areas. 

8. Weather Dashboard for Data Presentation


Share weather data for agricultural presentation topics, environmental studies, or any kind of presentation that requires a highly visual layout for weather forecasting on a single day. Two color themes are available.

9. Social Media Marketing Dashboard Data Presentation Template


Intended for marketing professionals, this dashboard template for data presentation is a tool for presenting data analytics from social media channels. It includes two slide layouts featuring line graphs and column charts.

10. Project Management Summary Dashboard Template


A tool crafted for project managers to deliver highly visual reports on a project’s completion, the profits it delivered for the company, and the expenses/time required to execute it. Four different color layouts are available.

11. Profit & Loss Dashboard for PowerPoint and Google Slides


A must-have for finance professionals, this typical profit & loss dashboard includes progress bars, donut charts, column charts, line graphs, and everything required to deliver a comprehensive report on a company’s financial situation.

Common Mistakes Done in Data Presentation

Overwhelming visuals

One of the most common mistakes in data presentation is including too much data or using overly complex visualizations, which can confuse the audience and dilute the key message.

Inappropriate chart types

Choosing the wrong type of chart for the data at hand can lead to misinterpretation. For example, using a pie chart for data that doesn’t represent parts of a whole is misleading.

Lack of context

Failing to provide context or sufficient labeling can make it challenging for the audience to understand the significance of the presented data.

Inconsistency in design

Using inconsistent design elements and color schemes across different visualizations can create confusion and visual disarray.

Failure to provide details

Simply presenting raw data without offering clear insights or takeaways can leave the audience without a meaningful conclusion.

Lack of focus

Not having a clear focus on the key message or main takeaway can result in a presentation that lacks a central theme.

Visual accessibility issues

Overlooking the visual accessibility of charts and graphs can exclude certain audience members who may have difficulty interpreting visual information.

To avoid these mistakes in data presentation, presenters can benefit from using presentation templates. These templates provide a structured framework and ensure consistency, clarity, and an aesthetically pleasing design, enhancing the overall impact of data communication.

Understanding and choosing data presentation types are pivotal in effective communication. Each method serves a unique purpose, so selecting the appropriate one depends on the nature of the data and the message to be conveyed. The diverse array of presentation types offers versatility in visually representing information, from bar charts showing values to pie charts illustrating proportions. 

Using the proper method enhances clarity, engages the audience, and ensures that data sets are not just presented but comprehensively understood. By appreciating the strengths and limitations of different presentation types, communicators can tailor their approach to convey information accurately, developing a deeper connection between data and audience understanding.

References

[1] Government of Canada, Statistics Canada (2021). Data Visualization: 5.2 Bar Chart. https://www150.statcan.gc.ca/n1/edu/power-pouvoir/ch9/bargraph-diagrammeabarres/5214818-eng.htm

[2] Kosslyn, S.M. (1989). Understanding charts and graphs. Applied Cognitive Psychology, 3(3), pp. 185-225. https://apps.dtic.mil/sti/pdfs/ADA183409.pdf

[3] Tufts University. Creating a Dashboard. https://it.tufts.edu/book/export/html/1870

[4] Golden West College. Data Dashboards. https://www.goldenwestcollege.edu/research/data-and-more/data-dashboards/index.html

[5] MIT. Line Graphs. https://www.mit.edu/course/21/21.guide/grf-line.htm

[6] Jadeja, M. and Shah, K. (2015). Tree-Map: A Visualization Tool for Large Data. In GSB@SIGIR (pp. 9-13). https://ceur-ws.org/Vol-1393/gsb15proceedings.pdf#page=15

[7] Columbia University Mailman School of Public Health. Heat Maps and Quilt Plots. https://www.publichealth.columbia.edu/research/population-health-methods/heat-maps-and-quilt-plots

[8] Eastern Illinois University. EIU QGIS Workshop: Heatmaps. https://www.eiu.edu/qgisworkshop/heatmaps.php

[9] MIT. About Pie Charts. https://www.mit.edu/~mbarker/formula1/f1help/11-ch-c8.htm

[10] University of Texas at Austin. Histograms. https://sites.utexas.edu/sos/guided/descriptive/numericaldd/descriptiven2/histogram/

[11] ASQ. Scatter Diagram. https://asq.org/quality-resources/scatter-diagram


A Guide To The Methods, Benefits & Problems of The Interpretation of Data


Table of Contents

1) What Is Data Interpretation?

2) How To Interpret Data?

3) Why Data Interpretation Is Important

4) Data Interpretation Skills

5) Data Analysis & Interpretation Problems

6) Data Interpretation Techniques & Methods

7) The Use of Dashboards For Data Interpretation

8) Business Data Interpretation Examples

Data analysis and interpretation have now taken center stage with the advent of the digital age… and the sheer amount of data can be frightening. In fact, a Digital Universe study found that the total data supply in 2012 was 2.8 trillion gigabytes! Based on that amount of data alone, it is clear that the calling card of any successful enterprise in today’s global world will be the ability to analyze complex data, produce actionable insights, and adapt to new market needs… all at the speed of thought.

Business dashboards are the digital age tools for big data. Capable of displaying key performance indicators (KPIs) for both quantitative and qualitative data analyses, they are ideal for making the fast-paced and data-driven market decisions that push today’s industry leaders to sustainable success. Through the art of streamlined visual communication, data dashboards permit businesses to engage in real-time and informed decision-making and are key instruments in data interpretation. First of all, let’s find a definition to understand what lies behind this practice.

What Is Data Interpretation?

Data interpretation refers to the process of using diverse analytical methods to review data and arrive at relevant conclusions. The interpretation of data helps researchers to categorize, manipulate, and summarize the information in order to answer critical questions.

The importance of data interpretation is evident, and this is why it needs to be done properly. Data is very likely to arrive from multiple sources and tends to enter the analysis process with haphazard ordering. Data analysis also tends to be highly subjective: the nature and goal of interpretation will vary from business to business, often correlating with the type of data being analyzed. While there are several types of processes implemented based on the nature of individual data, the two broadest and most common categories are quantitative and qualitative analysis.

Yet, before any serious data interpretation inquiry can begin, a sound decision must be made regarding measurement scales: visual presentations of data findings are irrelevant without one, and the choice will have a long-term impact on the ROI of data interpretation. The varying scales include:

  • Nominal Scale: non-numeric categories that cannot be ranked or compared quantitatively. Variables are exclusive and exhaustive.
  • Ordinal Scale: exclusive and exhaustive categories with a logical order. Quality ratings and agreement ratings are examples of ordinal scales (i.e., good, very good, fair, etc., or agree, strongly agree, disagree, etc.).
  • Interval Scale: a measurement scale where data is grouped into categories with orderly and equal distances between the categories. The zero point is arbitrary.
  • Ratio Scale: contains the features of all three scales, plus a true zero point.

For a more in-depth review of scales of measurement, read our article on data analysis questions . Once measurement scales have been selected, it is time to select which of the two broad interpretation processes will best suit your data needs. Let’s take a closer look at those specific methods and possible data interpretation problems.

How To Interpret Data? Top Methods & Techniques


When interpreting data, an analyst must try to discern the differences between correlation, causation, and coincidence, guard against many other biases, and consider all the factors involved that may have led to a result. There are various data interpretation types and methods one can use to achieve this.

The interpretation of data is designed to help people make sense of numerical data that has been collected, analyzed, and presented. Having a baseline method for interpreting data will provide your analyst teams with a structure and a consistent foundation. Indeed, if several departments have different approaches to interpreting the same data while sharing the same goals, mismatched objectives can result. Disparate methods lead to duplicated efforts, inconsistent solutions, wasted energy, and, inevitably, lost time and money. In this part, we will look at the two main methods of interpretation of data: qualitative and quantitative analysis.

Qualitative Data Interpretation

Qualitative data analysis can be summed up in one word – categorical. With this type of analysis, data is not described through numerical values or patterns but through the use of descriptive context (i.e., text). Typically, narrative data is gathered by employing a wide variety of person-to-person techniques. These techniques include:

  • Observations: detailing behavioral patterns that occur within an observation group. These patterns could be the amount of time spent in an activity, the type of activity, and the method of communication employed.
  • Focus groups: Group people and ask them relevant questions to generate a collaborative discussion about a research topic.
  • Secondary Research: much like how patterns of behavior can be observed, various types of documentation resources can be coded and divided based on the type of material they contain.
  • Interviews: one of the best collection methods for narrative data. Inquiry responses can be grouped by theme, topic, or category. The interview approach allows for highly focused data segmentation.

A key difference between qualitative and quantitative analysis is clearly noticeable in the interpretation stage. The first one is widely open to interpretation and must be “coded” so as to facilitate the grouping and labeling of data into identifiable themes. As person-to-person data collection techniques can often result in disputes pertaining to proper analysis, qualitative data analysis is often summarized through three basic principles: notice things, collect things, and think about things.

After qualitative data has been collected through transcripts, questionnaires, audio and video recordings, or the researcher’s notes, it is time to interpret it. For that purpose, there are some common methods used by researchers and analysts.

  • Content analysis: As its name suggests, this is a research method used to identify frequencies and recurring words, subjects, and concepts in image, video, or audio content. It transforms qualitative information into quantitative data to help discover trends and conclusions that will later support important research or business decisions. This method is often used by marketers to understand brand sentiment from the mouths of customers themselves, extracting valuable information to improve their products and services. It is recommended to use content analytics tools for this method, as performing it manually is very time-consuming and can lead to human error or subjectivity issues. Having a clear goal in mind before diving in is another great practice for avoiding getting lost in the fog. (A toy code sketch of this idea follows the list below.)
  • Thematic analysis: This method focuses on analyzing qualitative data, such as interview transcripts, survey questions, and others, to identify common patterns and separate the data into different groups according to found similarities or themes. For example, imagine you want to analyze what customers think about your restaurant. For this purpose, you do a thematic analysis on 1000 reviews and find common themes such as “fresh food”, “cold food”, “small portions”, “friendly staff”, etc. With those recurring themes in hand, you can extract conclusions about what could be improved or enhanced based on your customer’s experiences. Since this technique is more exploratory, be open to changing your research questions or goals as you go. 
  • Narrative analysis: A bit more specific and complicated than the two previous methods, it is used to analyze stories and discover their meaning. These stories can be extracted from testimonials, case studies, and interviews, as these formats give people more space to tell their experiences. Given that collecting this kind of data is harder and more time-consuming, sample sizes for narrative analysis are usually smaller, which makes it harder to reproduce its findings. However, it is still a valuable technique for understanding customers' preferences and mindsets.  
  • Discourse analysis : This method is used to draw the meaning of any type of visual, written, or symbolic language in relation to a social, political, cultural, or historical context. It is used to understand how context can affect how language is carried out and understood. For example, if you are doing research on power dynamics, using discourse analysis to analyze a conversation between a janitor and a CEO and draw conclusions about their responses based on the context and your research questions is a great use case for this technique. That said, like all methods in this section, discourse analytics is time-consuming as the data needs to be analyzed until no new insights emerge.  
  • Grounded theory analysis: The grounded theory approach aims to create or discover a new theory by carefully testing and evaluating the available data. Unlike all other qualitative approaches on this list, grounded theory extracts conclusions and hypotheses from the data instead of going into the analysis with a predefined hypothesis. This method is very popular among researchers, analysts, and marketers, as the results are completely data-backed, providing a factual explanation of any scenario. It is often used when researching a completely new topic, or one with little existing knowledge, as it gives space to start from the ground up.
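
As a toy illustration of the content-analysis idea (turning text into countable quantities), the snippet below tallies recurring words in a handful of made-up restaurant reviews; a real study would use proper tokenization, stop-word lists, and a coding scheme.

```python
from collections import Counter
import re

# Made-up customer reviews for illustration only.
reviews = [
    "Fresh food and friendly staff, but small portions.",
    "The food arrived cold. Staff were friendly though.",
    "Fresh food, generous portions, friendly staff!",
]

# Naive tokenization; a real content analysis would be far more careful.
words = re.findall(r"[a-z]+", " ".join(reviews).lower())
stopwords = {"the", "and", "but", "were", "though", "a"}
counts = Counter(w for w in words if w not in stopwords)

print(counts.most_common(5))
# e.g. [('food', 3), ('friendly', 3), ('staff', 3), ('fresh', 2), ('portions', 2)]
```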

Quantitative Data Interpretation

If quantitative data interpretation could be summed up in one word (and it really can’t), that word would be “numerical.” There are few certainties when it comes to data analysis, but you can be sure that if the research you are engaging in has no numbers involved, it is not quantitative research: this analysis refers to a set of processes by which numerical data is analyzed. More often than not, it involves the use of statistical modeling such as standard deviation, mean, and median. Let’s quickly review the most common statistical terms; a short code sketch follows the list:

  • Mean: A mean represents a numerical average for a set of responses. When dealing with a data set (or multiple data sets), a mean will represent the central value of a specific set of numbers. It is the sum of the values divided by the number of values within the data set. Other terms that can be used to describe the concept are arithmetic mean, average, and mathematical expectation.
  • Standard deviation: This is another statistical term commonly used in quantitative analysis. Standard deviation reveals the distribution of the responses around the mean. It describes the degree of consistency within the responses; together with the mean, it provides insight into data sets.
  • Frequency distribution: This is a measurement gauging how often a response appears within a data set. When using a survey, for example, frequency distribution can determine the number of times a specific ordinal scale response appears (i.e., agree, strongly agree, disagree, etc.). Frequency distribution is extremely useful for determining the degree of consensus among data points.
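
These three terms map directly onto Python’s standard library, as this short sketch shows with made-up numeric scores and ordinal survey answers.

```python
from statistics import mean, stdev
from collections import Counter

# Made-up numeric responses for illustration.
scores = [4, 5, 3, 4, 4, 5, 2, 4]
print(mean(scores))   # mean: 3.875
print(stdev(scores))  # sample standard deviation: ~0.99

# Frequency distribution of ordinal survey answers.
answers = ["agree", "strongly agree", "agree", "disagree", "agree"]
print(Counter(answers))  # Counter({'agree': 3, 'strongly agree': 1, 'disagree': 1})
```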

Typically, quantitative data is measured by visually presenting correlation tests between two or more variables of significance. Different processes can be used together or separately, and comparisons can be made to ultimately arrive at a conclusion. Other signature interpretation processes of quantitative data include the following (a minimal regression sketch follows the list):

  • Regression analysis: Essentially, it uses historical data to understand the relationship between a dependent variable and one or more independent variables. Knowing which variables are related and how they developed in the past allows you to anticipate possible outcomes and make better decisions going forward. For example, if you want to predict your sales for next month, you can use regression to understand what factors will affect them, such as products on sale and the launch of a new campaign, among many others. 
  • Cohort analysis: This method identifies groups of users who share common characteristics during a particular time period. In a business scenario, cohort analysis is commonly used to understand customer behaviors. For example, a cohort could be all users who have signed up for a free trial on a given day. An analysis would be carried out to see how these users behave, what actions they carry out, and how their behavior differs from other user groups.
  • Predictive analysis: As its name suggests, the predictive method aims to predict future developments by analyzing historical and current data. Powered by technologies such as artificial intelligence and machine learning, predictive analytics practices enable businesses to identify patterns or potential issues and plan informed strategies in advance.
  • Prescriptive analysis: Also powered by predictions, the prescriptive method uses techniques such as graph analysis, complex event processing, and neural networks, among others, to try to unravel the effect that future decisions will have in order to adjust them before they are actually made. This helps businesses to develop responsive, practical business strategies.
  • Conjoint analysis: Typically applied to survey analysis, the conjoint approach is used to analyze how individuals value different attributes of a product or service. This helps researchers and businesses to define pricing, product features, packaging, and many other attributes. A common use is menu-based conjoint analysis, in which individuals are given a “menu” of options from which they can build their ideal concept or product. Through this, analysts can understand which attributes they would pick above others and drive conclusions.
  • Cluster analysis: Last but not least, the cluster is a method used to group objects into categories. Since there is no target variable when using cluster analysis, it is a useful method to find hidden trends and patterns in the data. In a business context, clustering is used for audience segmentation to create targeted experiences. In market research, it is often used to identify age groups, geographical information, and earnings, among others.
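
To make the regression idea concrete, here is a minimal least-squares fit with NumPy on made-up data relating campaign spend to sales; real regression work would add train/test splits, more variables, and diagnostics.

```python
import numpy as np

# Made-up history: monthly campaign spend ($k) vs. sales ($k).
spend = np.array([10, 15, 20, 25, 30, 35])
sales = np.array([110, 130, 148, 172, 190, 215])

# Fit a line: sales ≈ slope * spend + intercept.
slope, intercept = np.polyfit(spend, sales, deg=1)
print(f"sales ≈ {slope:.1f} * spend + {intercept:.1f}")

# Anticipate sales for a planned $40k spend.
print(slope * 40 + intercept)
```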

Now that we have seen how to interpret data, let's move on and ask ourselves some questions: What are some of the benefits of data interpretation? Why do all industries engage in data research and analysis? These are basic questions, but they often don’t receive adequate attention.


Why Data Interpretation Is Important


The purpose of collection and interpretation is to acquire useful and usable information and to make the most informed decisions possible. From businesses to newlyweds researching their first home, data collection and interpretation provide limitless benefits for a wide range of institutions and individuals.

Data analysis and interpretation, regardless of the method and qualitative/quantitative status, may include the following characteristics:

  • Data identification and explanation
  • Comparing and contrasting data
  • Identification of data outliers
  • Future predictions

Data analysis and interpretation, in the end, help improve processes and identify problems. It is difficult to grow and make dependable improvements without, at the very least, minimal data collection and interpretation. What is the keyword? Dependable. Vague ideas regarding performance enhancement exist within all institutions and industries. Yet, without proper research and analysis, an idea is likely to remain in a stagnant state forever (i.e., minimal growth). So… what are a few of the business benefits of digital age data analysis and interpretation? Let’s take a look!

1) Informed decision-making: A decision is only as good as the knowledge that formed it. Informed data decision-making can potentially set industry leaders apart from the rest of the market pack. Studies have shown that companies in the top third of their industries are, on average, 5% more productive and 6% more profitable when implementing informed data decision-making processes. Most decisive actions will arise only after a problem has been identified or a goal defined. Data analysis should include identification, thesis development, and data collection, followed by data communication.

If institutions only follow that simple order, one that we should all be familiar with from grade-school science fairs, then they will be able to solve issues as they emerge in real time. Informed decision-making tends to be cyclical: there is really no end, and eventually, new questions and conditions arise within the process that need to be studied further. The monitoring of data results will inevitably return the process to the start with new data and insights.

2) Anticipating needs with trend identification: data insights provide knowledge, and knowledge is power. The insights obtained from market and consumer data analyses have the ability to set trends for peers within similar market segments. A perfect example of how data analytics can impact trend prediction is evidenced in the music identification application Shazam. The application allows users to upload an audio clip of a song they like but can’t seem to identify. Users make 15 million song identifications a day. With this data, Shazam has been instrumental in predicting future popular artists.

When industry trends are identified, they can then serve a greater industry purpose. For example, the insights from Shazam’s monitoring benefit not only Shazam in understanding how to meet consumer needs, but also grant music executives and record label companies insight into the pop-culture scene of the day. Data gathering and interpretation processes can allow for industry-wide climate prediction and result in greater revenue streams across the market. For this reason, all institutions should follow the basic data cycle of collection, interpretation, decision-making, and monitoring.

3) Cost efficiency: Proper implementation of analytics processes can provide businesses with profound cost advantages within their industries. A data study performed by Deloitte vividly demonstrates this, finding that data analysis ROI is driven by efficient cost reductions. This benefit is often overlooked because making money is typically viewed as “sexier” than saving money. Yet, sound data analyses have the ability to alert management to cost-reduction opportunities without any significant exertion of effort on the part of human capital.

A great example of the potential for cost efficiency through data analysis is Intel. Prior to 2012, Intel would conduct over 19,000 manufacturing function tests on their chips before they could be deemed acceptable for release. To cut costs and reduce test time, Intel implemented predictive data analyses. By using historical and current data, Intel now avoids testing each chip 19,000 times by focusing on specific and individual chip tests. After its implementation in 2012, Intel saved over $3 million in manufacturing costs. Cost reduction may not be as “sexy” as data profit, but as Intel proves, it is a benefit of data analysis that should not be neglected.

4) Clear foresight: companies that collect and analyze their data gain better knowledge about themselves, their processes, and their performance. They can identify performance challenges when they arise and take action to overcome them. Data interpretation through visual representations lets them process their findings faster and make better-informed decisions on the company's future.

Key Data Interpretation Skills You Should Have

Just like any other process, data interpretation and analysis require researchers or analysts to have some key skills to perform successfully. It is not enough just to apply some methods and tools to the data; the person managing it needs to be objective and have a data-driven mind, among other skills.

It is a common misconception to think that the required skills are mostly number-related. While data interpretation is heavily analytically driven, it also requires communication and narrative skills, as the results of the analysis need to be presented in a way that is easy to understand for all types of audiences. 

Luckily, with the rise of self-service tools and AI-driven technologies, data interpretation is no longer reserved for analysts alone. However, it remains a big challenge for businesses that invest heavily in data and supporting tools while the necessary interpretation skills are still lacking. It is pointless to put massive amounts of money into extracting information if you are not able to interpret what that information is telling you. For that reason, below we list the top 5 data interpretation skills your employees or researchers should have to extract the maximum potential from the data.

  • Data Literacy: The first and most important skill is data literacy: the ability to understand, work with, and communicate data. It involves knowing the types of data sources and methods, and the ethical implications of using them. In research, this skill is often a given; in a business context, however, there may be many employees who are not comfortable with data. The interpretation of data cannot be the sole responsibility of the data team, as that is not sustainable in the long run. Experts advise business leaders to carefully assess the literacy level across their workforce and implement training to ensure everyone can interpret their data.
  • Data Tools: The data interpretation and analysis process involves using various tools to collect, clean, store, and analyze the data. The complexity of the tools varies depending on the type of data and the analysis goals, ranging from simple ones like Excel to databases queried with SQL and programming languages such as R or Python. It also involves visual analytics tools that bring the data to life through graphs and charts. Managing these tools is a fundamental skill, as they make the process faster and more efficient, and most modern solutions are now self-service, enabling less technical users to work with them without problems. A minimal sketch of such a workflow follows this list.
  • Critical Thinking: Another essential skill is critical thinking. Data hides a range of conclusions, trends, and patterns that must be discovered. It is not just about comparing numbers; it is about putting a story together based on multiple factors that lead to a conclusion. The ability to look beyond what is right in front of you is therefore invaluable for data interpretation.
  • Data Ethics: In the information age, being aware of the legal and ethical responsibilities that come with the use of data is of utmost importance. In short, data ethics involves respecting the privacy and confidentiality of data subjects, as well as ensuring accuracy and transparency in data usage. It requires the analyst or researcher to be completely objective in their interpretation to avoid bias or discrimination. Many countries have already implemented regulations regarding the use of data, such as the GDPR, and professional bodies publish standards such as the ACM Code of Ethics. Awareness of these regulations and responsibilities is a fundamental skill for anyone working in data interpretation.
  • Domain Knowledge: Another skill that is considered important when interpreting data is to have domain knowledge. As mentioned before, data hides valuable insights that need to be uncovered. To do so, the analyst needs to know about the industry or domain from which the information is coming and use that knowledge to explore it and put it into a broader context. This is especially valuable in a business context, where most departments are now analyzing data independently with the help of a live dashboard instead of relying on the IT department, which can often overlook some aspects due to a lack of expertise in the topic. 
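To make the tooling point concrete, here is a minimal, hypothetical sketch of the collect-clean-analyze loop in Python with pandas. The file name and column names are invented placeholders, not a prescribed setup.

```python
# A minimal collect -> clean -> analyze loop with pandas.
# "survey_results.csv" and its columns are hypothetical placeholders.
import pandas as pd

# Collect: load raw data exported from a survey or CRM tool.
df = pd.read_csv("survey_results.csv")

# Clean: drop duplicate submissions and rows missing the score we study.
df = df.drop_duplicates().dropna(subset=["satisfaction_score"])

# Analyze: summary statistics plus a simple group comparison.
print(df["satisfaction_score"].describe())
print(df.groupby("customer_segment")["satisfaction_score"].mean())
```

The same steps translate to SQL or R; what matters is that every stage, from collection to analysis, is explicit and repeatable.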

Common Data Analysis And Interpretation Problems


The oft-repeated mantra of those who fear data advancements in the digital age is “big data equals big trouble.” While that statement is not accurate, it is safe to say that certain data interpretation problems or “pitfalls” exist and can occur when analyzing data, especially at the speed of thought. Let’s identify some of the most common data misinterpretation risks and shed some light on how they can be avoided:

1) Correlation mistaken for causation: our first misinterpretation of data refers to the tendency of analysts to confuse the cause of a phenomenon with correlation. It is the assumption that because two actions occurred together, one caused the other. This is inaccurate, as actions can occur together without any cause-and-effect relationship.

  • Digital age example: assuming that increased revenue results from increased social media followers. There might be a definite correlation between the two, especially with today's multi-channel purchasing experiences, but that does not mean the increase in followers is the direct cause of increased revenue. There could be a common cause or an indirect causality.
  • Remedy: attempt to isolate or eliminate the variable you believe to be causing the phenomenon; the simulation sketch below shows how a hidden common cause can manufacture a convincing correlation.
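The following toy simulation uses invented numbers, not real campaign data, to illustrate the pitfall: a hidden "seasonality" factor drives both follower growth and revenue, so the two correlate strongly even though neither causes the other.

```python
# A toy common-cause simulation: "seasonality" drives both series.
import numpy as np

rng = np.random.default_rng(42)
seasonality = rng.normal(size=1_000)                  # hidden common cause
followers = 2.0 * seasonality + rng.normal(size=1_000)
revenue = 3.0 * seasonality + rng.normal(size=1_000)

print(np.corrcoef(followers, revenue)[0, 1])          # ~0.85: looks causal

# Regress the common cause out of both series; the correlation between
# the residuals is near zero, exposing the missing causal link.
bf, af = np.polyfit(seasonality, followers, 1)
br, ar = np.polyfit(seasonality, revenue, 1)
print(np.corrcoef(followers - (bf * seasonality + af),
                  revenue - (br * seasonality + ar))[0, 1])   # ~0
```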

2) Confirmation bias: our second problem is data interpretation bias. It occurs when you have a theory or hypothesis in mind but are intent on only discovering data patterns that support it while rejecting those that do not.

  • Digital age example: your boss asks you to analyze the success of a recent multi-platform social media marketing campaign. While analyzing the potential data variables from the campaign (one that you ran and believe performed well), you see that the share rate for Facebook posts was great, while the share rate for Twitter Tweets was not. Using only Facebook posts to prove your hypothesis that the campaign was successful would be a perfect manifestation of confirmation bias.
  • Remedy: as this pitfall is often based on subjective desires, one remedy would be to analyze data with a team of objective individuals. If this is not possible, another solution is to resist the urge to make a conclusion before data exploration has been completed. Remember to always try to disprove a hypothesis, not prove it.

3) Irrelevant data: the third data misinterpretation pitfall is especially important in the digital age. As large data is no longer centrally stored and as it continues to be analyzed at the speed of thought, it is inevitable that analysts will focus on data that is irrelevant to the problem they are trying to correct.

  • Digital age example: in attempting to gauge the success of an email lead generation campaign, you notice that the number of homepage views directly resulting from the campaign increased, but the number of monthly newsletter subscribers did not. Based on the number of homepage views, you decide the campaign was a success when really it generated zero leads.
  • Remedy: proactively and clearly frame any data analysis variables and KPIs prior to engaging in a data review. If the metric you use to measure the success of a lead generation campaign is newsletter subscribers, there is no need to review the number of homepage visits. Be sure to focus on the data variable that answers your question or solves your problem and not on irrelevant data.

4) Truncating an axis: When creating a graph to start interpreting the results of your analysis, it is important to keep the axes truthful and avoid generating misleading visualizations. Starting an axis at a value that doesn't portray the actual range of the data can lead to false conclusions.

  • Digital age example: In the image below, we can see a graph from Fox News in which the Y-axis starts at 34%, making the difference between 35% and 39.6% look far larger than it actually is. This could lead to a misinterpretation of the tax rate changes.

Fox News graph with a truncated Y-axis

Source: www.venngage.com

  • Remedy: Be careful with how your data is visualized. Be honest and realistic with axes to avoid misinterpretation of your data. See below how the Fox News chart looks when using correct axis values. This chart was created with datapine's modern online data visualization tool.

Fox News graph with the correct axis values

5) (Small) sample size: Another common problem is using a small sample size. Logically, the bigger the sample size, the more accurate and reliable the results. However, this also depends on the size of the effect of the study. For example, the sample size in a survey about the quality of education will not be the same as for one about people doing outdoor sports in a specific area. 

  • Digital age example: Imagine you ask 20 people a question and 19 answer “yes”, i.e., 95% of the sample. Now imagine you ask the same question to 1,000 people and 950 of them answer “yes”, which is again 95%. While these percentages look the same, they certainly do not mean the same thing, as a 20-person sample is not a significant basis for establishing a truthful conclusion.
  • Remedy: Researchers say that in order to determine the correct sample size for truthful and meaningful results, it is necessary to define a margin of error representing the maximum amount they want the results to deviate from the statistical mean, paired with a confidence level, typically between 90% and 99%. With these two values in hand, researchers can calculate an accurate sample size for their study, as the sketch below illustrates.
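As a rough illustration of that calculation, here is a minimal Python sketch of the standard sample-size formula for a proportion, n = z² · p(1−p) / e², assuming simple random sampling and the conservative worst case p = 0.5.

```python
# Minimal sample-size calculation for a proportion.
import math
from statistics import NormalDist

def sample_size(margin_of_error: float, confidence: float, p: float = 0.5) -> int:
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)   # two-sided z-score
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

print(sample_size(0.05, 0.95))   # ~385 respondents for +/-5% at 95% confidence
print(sample_size(0.05, 0.99))   # ~664 respondents at 99% confidence
```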

6) Reliability, subjectivity, and generalizability: When performing qualitative analysis, researchers must consider practical and theoretical limitations when interpreting the data. In some cases, this type of research can be considered unreliable because of uncontrolled factors that may or may not affect the results. This is compounded by the fact that the researcher plays a primary role in the interpretation process, deciding what is relevant and what is not, and, as we know, interpretations can be very subjective.

Generalizability is also an issue that researchers face when dealing with qualitative analysis. As mentioned in the point about having a small sample size, it is difficult to draw conclusions that are 100% representative because the results might be biased or unrepresentative of a wider population. 

While these factors are mostly present in qualitative research, they can also affect the quantitative analysis. For example, when choosing which KPIs to portray and how to portray them, analysts can also be biased and represent them in a way that benefits their analysis.

  • Digital age example: Biased survey questions are a great example of reliability and subjectivity issues. Imagine you send a survey to your clients to gauge satisfaction with your customer service using this question: “How amazing was your experience with our customer service team?”. The question clearly influences the response by planting the word “amazing” in it.
  • Remedy: A solution to avoid these issues is to keep your research honest and neutral. Keep the wording of the questions as objective as possible. For example: “On a scale of 1-10, how satisfied were you with our customer service team?”. This does not lead the respondent to any specific answer, meaning the results of your survey will be reliable. 

Data Interpretation Best Practices & Tips


Data analysis and interpretation are critical to developing sound conclusions and making better-informed decisions. As we have seen with this article, there is an art and science to the interpretation of data. To help you with this purpose, we will list a few relevant techniques, methods, and tricks you can implement for a successful data management process. 

As mentioned at the beginning of this post, the first step to interpreting data successfully is to identify the type of analysis you will perform and apply the methods accordingly. Clearly differentiate between qualitative analysis (observing, documenting, and interviewing; noticing, collecting, and thinking about things) and quantitative analysis (research based on large amounts of numerical data to be analyzed through various statistical methods).

1) Ask the right data interpretation questions

The first data interpretation technique is to define a clear baseline for your work. This can be done by answering some critical questions that will serve as a useful guideline to start. Some of them include: what are the goals and objectives of my analysis? What type of data interpretation method will I use? Who will use this data in the future? And most importantly, what general question am I trying to answer?

Once all this information has been defined, you will be ready for the next step: collecting your data. 

2) Collect and assimilate your data

Now that a clear baseline has been established, it is time to collect the information you will use. Always remember that your methods for data collection will vary depending on what type of analysis method you use, which can be qualitative or quantitative. Based on that, relying on professional online data analysis tools to facilitate the process is a great practice in this regard, as manually collecting and assessing raw data is not only very time-consuming and expensive but is also at risk of errors and subjectivity. 

Once your data is collected, you need to carefully assess it to understand if the quality is appropriate to be used during a study. This means, is the sample size big enough? Were the procedures used to collect the data implemented correctly? Is the date range from the data correct? If coming from an external source, is it a trusted and objective one? 

With all the needed information in hand, you are ready to start the interpretation process, but first, you need to visualize your data. 

3) Use the right data visualization type 

Data visualizations such as business graphs, charts, and tables are fundamental to successfully interpreting data. This is because data visualization via interactive charts and graphs makes the information more understandable and accessible. As you might be aware, there are different types of visualizations you can use, but not all of them are suitable for every analysis purpose. Using the wrong graph can lead to misinterpretation of your data, so it's very important to carefully pick the right visual for it. Let's look at some use cases of common data visualizations; a minimal plotting sketch follows the list.

  • Bar chart: One of the most used chart types, the bar chart uses rectangular bars to show the relationship between 2 or more variables. There are different types of bar charts for different interpretations, including the horizontal bar chart, column bar chart, and stacked bar chart. 
  • Line chart: Most commonly used to show trends, accelerations or decelerations, and volatility, the line chart aims to show how data changes over a period of time, for example, sales over a year. A few tips to keep this chart ready for interpretation: avoid plotting so many variables that they overcrowd the graph, and keep your axis scale close to the highest data point so the information stays easy to read.
  • Pie chart: Although it doesn't do much in terms of analysis due to its simple nature, the pie chart is widely used to show the proportional composition of a variable. Visually speaking, showing a percentage in a pie chart is far more intuitive than showing it in a bar chart. However, this also depends on the number of variables you are comparing: if your pie chart needs to be divided into 10 portions, it is better to use a bar chart instead.
  • Tables: While they are not a specific type of chart, tables are widely used when interpreting data. Tables are especially useful when you want to portray data in its raw format. They give you the freedom to easily look up or compare individual values while also displaying grand totals. 
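As a quick, hedged illustration of these chart choices, here is a minimal matplotlib sketch in Python; the monthly sales figures and revenue mix are made up purely for demonstration.

```python
# Bar, line, and pie charts side by side, using invented sales figures.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
sales = [120, 135, 128, 150, 162, 158]

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(12, 3.5))

ax1.bar(months, sales)                 # bar chart: compare categories
ax1.set_title("Sales by month (bar)")

ax2.plot(months, sales, marker="o")    # line chart: show the trend over time
ax2.set_title("Sales trend (line)")

# Pie chart: proportional composition; keep it to a handful of slices.
ax3.pie([55, 30, 15], labels=["Online", "Retail", "Partner"], autopct="%1.0f%%")
ax3.set_title("Revenue mix (pie)")

plt.tight_layout()
plt.show()
```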

With the use of data visualizations becoming more and more critical for businesses’ analytical success, many tools have emerged to help users visualize their data in a cohesive and interactive way. One of the most popular ones is the use of BI dashboards . These visual tools provide a centralized view of various graphs and charts that paint a bigger picture of a topic. We will discuss the power of dashboards for an efficient data interpretation practice in the next portion of this post. If you want to learn more about different types of graphs and charts , take a look at our complete guide on the topic. 

4) Start interpreting 

After the tedious preparation part, you can start extracting conclusions from your data. As mentioned many times throughout the post, the way you decide to interpret the data will solely depend on the methods you initially decided to use. If you had initial research questions or hypotheses, then you should look for ways to prove their validity. If you are going into the data with no defined hypothesis, then start looking for relationships and patterns that will allow you to extract valuable conclusions from the information. 

During the process of interpretation, stay curious and creative, dig into the data, and determine if there are any other critical questions that should be asked. If any new questions arise, you need to assess if you have the necessary information to answer them. Being able to identify if you need to dedicate more time and resources to the research is a very important step. No matter if you are studying customer behaviors or a new cancer treatment, the findings from your analysis may dictate important decisions in the future. Therefore, taking the time to really assess the information is key. For that purpose, data interpretation software proves to be very useful.

5) Keep your interpretation objective

As mentioned above, objectivity is one of the most important data interpretation skills, but also one of the hardest. As the person closest to the investigation, it is easy to become subjective when looking for answers in the data. A good way to stay objective is to show the information related to the study to other people, for example, research partners or even the people who will use your findings once they are done. This can help avoid confirmation bias and any reliability issues with your interpretation.

Remember, using a visualization tool such as a modern dashboard will make the interpretation process way easier and more efficient as the data can be navigated and manipulated in an easy and organized way. And not just that, using a dashboard tool to present your findings to a specific audience will make the information easier to understand and the presentation way more engaging thanks to the visual nature of these tools. 

6) Mark your findings and draw conclusions

Findings are the observations you extracted from your data. They are the facts that will help you drive deeper conclusions about your research. For example, findings can be trends and patterns you found during your interpretation process. To put your findings into perspective, you can compare them with other resources that use similar methods and use them as benchmarks.

Reflect on your own thinking and reasoning and be aware of the many pitfalls data analysis and interpretation carry—correlation versus causation, subjective bias, false information, inaccurate data, etc. Once you are comfortable with interpreting the data, you will be ready to develop conclusions, see if your initial questions were answered, and suggest recommendations based on them.

Interpretation of Data: The Use of Dashboards Bridging The Gap

As we have seen, quantitative and qualitative methods are distinct types of data interpretation and analysis. Both offer a varying degree of return on investment (ROI) regarding data investigation, testing, and decision-making. But how do you mix the two and prevent a data disconnect? The answer is professional data dashboards. 

For a few years now, dashboards have become invaluable tools to visualize and interpret data. These tools offer a centralized and interactive view of data and provide the perfect environment for exploration and extracting valuable conclusions. They bridge the quantitative and qualitative information gap by unifying all the data in one place with the help of stunning visuals. 

Not only that, but these powerful tools offer a large list of benefits, and we will discuss some of them below. 

1) Connecting and blending data. With today’s pace of innovation, it is no longer feasible (nor desirable) to have bulk data centrally located. As businesses continue to globalize and borders continue to dissolve, it will become increasingly important for businesses to possess the capability to run diverse data analyses absent the limitations of location. Data dashboards decentralize data without compromising on the necessary speed of thought while blending both quantitative and qualitative data. Whether you want to measure customer trends or organizational performance, you now have the capability to do both without the need for a singular selection.

2) Mobile Data. Related to the notion of “connected and blended data” is that of mobile data. In today’s digital world, employees are spending less time at their desks and simultaneously increasing production. This is made possible because mobile solutions for analytical tools are no longer standalone. Today, mobile analysis applications seamlessly integrate with everyday business tools. In turn, both quantitative and qualitative data are now available on-demand where they’re needed, when they’re needed, and how they’re needed via interactive online dashboards .

3) Visualization. Data dashboards merge the data gap between qualitative and quantitative data interpretation methods through the science of visualization. Dashboard solutions come “out of the box” and are well-equipped to create easy-to-understand data demonstrations. Modern online data visualization tools provide a variety of color and filter patterns, encourage user interaction, and are engineered to help enhance future trend predictability. All of these visual characteristics make for an easy transition among data methods – you only need to find the right types of data visualization to tell your data story the best way possible.

4) Collaboration. Whether in a business environment or a research project, collaboration is key in data interpretation and analysis. Dashboards are online tools that can be easily shared through a password-protected URL or automated email. Through them, users can collaborate and communicate around the data in an efficient way, eliminating the need for endless file versions with lost updates. Tools such as datapine offer real-time updates, meaning your dashboards will refresh on their own as soon as new information is available.

Examples Of Data Interpretation In Business

To give you an idea of how a dashboard can bridge quantitative and qualitative analysis and, thanks to visualization, help in understanding how to interpret data in research, below we discuss three examples that put this into perspective.

1. Customer Satisfaction Dashboard 

This market research dashboard brings together both qualitative and quantitative data that are knowledgeably analyzed and visualized in a meaningful way that everyone can understand, thus empowering any viewer to interpret it. Let’s explore it below. 

Data interpretation example on customers' satisfaction with a brand


The value of this template lies in its highly visual nature. As mentioned earlier, visuals make the interpretation process easier and more efficient. Having critical pieces of data represented with colorful and interactive icons and graphs makes it possible to uncover insights at a glance. For example, the green, yellow, and red colors on the charts for the NPS and the customer effort score allow us to conclude at a glance that most respondents are satisfied with this brand. The line chart below them deepens this conclusion, as we can see both metrics developed positively over the past 6 months.

The bottom part of the template provides visually stunning representations of different satisfaction scores for quality, pricing, design, and service. By looking at these, we can conclude that, overall, customers are satisfied with this company in most areas. 

2. Brand Analysis Dashboard

Next, in our list of data interpretation examples, we have a template that shows the answers to a survey on awareness for Brand D. The sample size is listed on top to get a perspective of the data, which is represented using interactive charts and graphs. 

Data interpretation example using a market research dashboard for brand awareness analysis

When interpreting information, context is key to understanding it correctly. For that reason, the dashboard starts by offering insights into the demographics of the surveyed audience. In general, we can see that ages and genders are diverse; therefore, we can conclude these brands are not targeting customers from one specific demographic, an important aspect for putting the survey answers into perspective.

Looking at the awareness portion, we can see that brand B is the most popular one, with brand D coming second on both questions. This means brand D is not doing badly, but there is still room for improvement compared to brand B. To see where brand D could improve, the researcher could go to the bottom part of the dashboard and consult the answers for branding themes and celebrity analysis. These are important as they give clear insight into which people and messages the audience associates with brand D, an opportunity to exploit these topics in different ways and achieve growth and success.

3. Product Innovation Dashboard 

Our third and last dashboard example shows the answers to a survey on product innovation for a technology company. Just like the previous templates, the interactive and visual nature of the dashboard makes it the perfect tool to interpret data efficiently and effectively. 

Market research results on product innovation, useful for product development and pricing decisions as an example of data interpretation using dashboards

Starting from right to left, we first get a list of the top 5 products by purchase intention. This information lets us understand if the product being evaluated resembles what the audience already intends to purchase. It is a great starting point to see how customers would respond to the new product. This information can be complemented with other key metrics displayed in the dashboard. For example, the usage and purchase intention track how the market would receive the product and if they would purchase it, respectively. Interpreting these values as positive or negative will depend on the company and its expectations regarding the survey. 

Complementing these metrics, we have willingness to pay, arguably one of the most important metrics for defining pricing strategies. Here, we can see that most respondents think the suggested price is good value for money; therefore, we can infer that the product would sell at that price.

To see more data analysis and interpretation examples for different industries and functions, visit our library of business dashboards.

To Conclude…

As we reach the end of this insightful post about data interpretation and analysis, we hope you have a clear understanding of the topic. We've covered the definition and given some examples and methods to perform a successful interpretation process.

The importance of data interpretation is undeniable. Dashboards not only bridge the information gap between traditional data interpretation methods and technology, but they can help remedy and prevent the major pitfalls of the process. As a digital age solution, they combine the best of the past and the present to allow for informed decision-making with maximum data interpretation ROI.

To start visualizing your insights in a meaningful and actionable way, test our online reporting software for free with our 14-day trial!


Data Interpretation – Process, Methods and Questions


Definition:

Data interpretation refers to the process of making sense of data by analyzing and drawing conclusions from it. It involves examining data in order to identify patterns, relationships, and trends that can help explain the underlying phenomena being studied. Data interpretation can be used to make informed decisions and solve problems across a wide range of fields, including business, science, and social sciences.

Data Interpretation Process

Here are the steps involved in the data interpretation process:

  • Define the research question : The first step in data interpretation is to clearly define the research question. This will help you to focus your analysis and ensure that you are interpreting the data in a way that is relevant to your research objectives.
  • Collect the data: The next step is to collect the data. This can be done through a variety of methods such as surveys, interviews, observation, or secondary data sources.
  • Clean and organize the data : Once the data has been collected, it is important to clean and organize it. This involves checking for errors, inconsistencies, and missing data. Data cleaning can be a time-consuming process, but it is essential to ensure that the data is accurate and reliable.
  • Analyze the data: The next step is to analyze the data. This can involve using statistical software or other tools to calculate summary statistics, create graphs and charts, and identify patterns in the data.
  • Interpret the results: Once the data has been analyzed, it is important to interpret the results. This involves looking for patterns, trends, and relationships in the data. It also involves drawing conclusions based on the results of the analysis.
  • Communicate the findings : The final step is to communicate the findings. This can involve creating reports, presentations, or visualizations that summarize the key findings of the analysis. It is important to communicate the findings in a way that is clear and concise, and that is tailored to the audience’s needs.

Types of Data Interpretation

There are various types of data interpretation techniques used for analyzing and making sense of data. Here are some of the most common types:

Descriptive Interpretation

This type of interpretation involves summarizing and describing the key features of the data. This can involve calculating measures of central tendency (such as mean, median, and mode), measures of dispersion (such as range, variance, and standard deviation), and creating visualizations such as histograms, box plots, and scatterplots.
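As a minimal illustration, the sketch below computes these descriptive measures in Python using only the standard library; the response-time values are invented for demonstration.

```python
# Descriptive interpretation: central tendency and dispersion for a
# small, made-up set of response times (milliseconds).
import statistics

response_times = [212, 230, 198, 245, 230, 310, 205, 230, 221, 199]

print("mean:  ", statistics.mean(response_times))
print("median:", statistics.median(response_times))
print("mode:  ", statistics.mode(response_times))
print("stdev: ", statistics.stdev(response_times))
print("range: ", max(response_times) - min(response_times))
```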

Inferential Interpretation

This type of interpretation involves making inferences about a larger population based on a sample of the data. This can involve hypothesis testing, where you test a hypothesis about a population parameter using sample data, or confidence interval estimation, where you estimate a range of values for a population parameter based on sample data.
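The following minimal Python sketch, using invented sample values and a normal approximation (a t-distribution would be more precise for a sample this small), estimates a confidence interval for a sample mean.

```python
# A rough confidence interval for a sample mean, normal approximation.
from statistics import NormalDist, mean, stdev

sample = [4.1, 3.8, 4.5, 4.0, 4.2, 3.9, 4.4, 4.1, 4.3, 4.0]
confidence = 0.95

m = mean(sample)
se = stdev(sample) / len(sample) ** 0.5          # standard error of the mean
z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)

print(f"{confidence:.0%} CI for the mean: {m - z * se:.2f} to {m + z * se:.2f}")
```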

Predictive Interpretation

This type of interpretation involves using data to make predictions about future outcomes. This can involve building predictive models using statistical techniques such as regression analysis, time-series analysis, or machine learning algorithms.
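To make this concrete, here is a minimal Python sketch that fits a linear trend to made-up monthly sales with NumPy and extrapolates one month ahead; real predictive work would validate such a model far more carefully.

```python
# Fit a least-squares line to twelve months of invented sales and
# extrapolate it to month 13.
import numpy as np

months = np.arange(1, 13)
sales = np.array([100, 104, 109, 112, 118, 121,
                  127, 131, 134, 140, 143, 149])

slope, intercept = np.polyfit(months, sales, 1)   # linear trend
forecast = slope * 13 + intercept                 # month-13 prediction
print(f"Trend: +{slope:.1f} per month; month-13 forecast: {forecast:.0f}")
```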

Exploratory Interpretation

This type of interpretation involves exploring the data to identify patterns and relationships that were not previously known. This can involve data mining techniques such as clustering analysis, principal component analysis, or association rule mining.

Causal Interpretation

This type of interpretation involves identifying causal relationships between variables in the data. This can involve experimental designs, such as randomized controlled trials, or observational studies, such as regression analysis or propensity score matching.

Data Interpretation Methods

There are various methods for data interpretation that can be used to analyze and make sense of data. Here are some of the most common methods:

Statistical Analysis

This method involves using statistical techniques to analyze the data. Statistical analysis can involve descriptive statistics (such as measures of central tendency and dispersion), inferential statistics (such as hypothesis testing and confidence interval estimation), and predictive modeling (such as regression analysis and time-series analysis).

Data Visualization

This method involves using visual representations of the data to identify patterns and trends. Data visualization can involve creating charts, graphs, and other visualizations, such as heat maps or scatterplots.

Text Analysis

This method involves analyzing text data, such as survey responses or social media posts, to identify patterns and themes. Text analysis can involve techniques such as sentiment analysis, topic modeling, and natural language processing.
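As a toy illustration, the sketch below counts word frequencies and computes a crude lexicon-based sentiment score over invented survey comments; production text analysis would use a proper NLP library, but the principle is the same.

```python
# Word frequencies plus a crude lexicon-based sentiment score.
from collections import Counter
import re

comments = [
    "Great service, very fast and friendly",
    "Slow response and unhelpful support",
    "Friendly staff, great experience overall",
]

positive = {"great", "fast", "friendly", "helpful"}
negative = {"slow", "unhelpful", "bad"}

words = [w for c in comments for w in re.findall(r"[a-z]+", c.lower())]
print(Counter(words).most_common(5))             # recurring themes

score = sum((w in positive) - (w in negative) for w in words)
print("net sentiment score:", score)             # > 0 leans positive
```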

Machine Learning

This method involves using algorithms to identify patterns in the data and make predictions or classifications. Machine learning can involve techniques such as decision trees, neural networks, and random forests.
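Here is a minimal, hypothetical scikit-learn sketch of the idea: a shallow decision tree classifying made-up customers as likely churners from two simple features.

```python
# A shallow decision tree on invented customer data.
from sklearn.tree import DecisionTreeClassifier

# Features: [monthly_spend, support_tickets]; label: 1 = churned.
X = [[20, 5], [25, 4], [90, 0], [85, 1], [30, 6], [95, 0]]
y = [1, 1, 0, 0, 1, 0]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(model.predict([[28, 5], [88, 1]]))   # -> [1 0]: churn vs. stay
```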

Qualitative Analysis

This method involves analyzing non-numeric data, such as interviews or focus group discussions, to identify themes and patterns. Qualitative analysis can involve techniques such as content analysis, grounded theory, and narrative analysis.

Geospatial Analysis

This method involves analyzing spatial data, such as maps or GPS coordinates, to identify patterns and relationships. Geospatial analysis can involve techniques such as spatial autocorrelation, hot spot analysis, and clustering.

Applications of Data Interpretation

Data interpretation has a wide range of applications across different fields, including business, healthcare, education, social sciences, and more. Here are some examples of how data interpretation is used in different applications:

  • Business : Data interpretation is widely used in business to inform decision-making, identify market trends, and optimize operations. For example, businesses may analyze sales data to identify the most popular products or customer demographics, or use predictive modeling to forecast demand and adjust pricing accordingly.
  • Healthcare : Data interpretation is critical in healthcare for identifying disease patterns, evaluating treatment effectiveness, and improving patient outcomes. For example, healthcare providers may use electronic health records to analyze patient data and identify risk factors for certain diseases or conditions.
  • Education : Data interpretation is used in education to assess student performance, identify areas for improvement, and evaluate the effectiveness of instructional methods. For example, schools may analyze test scores to identify students who are struggling and provide targeted interventions to improve their performance.
  • Social sciences : Data interpretation is used in social sciences to understand human behavior, attitudes, and perceptions. For example, researchers may analyze survey data to identify patterns in public opinion or use qualitative analysis to understand the experiences of marginalized communities.
  • Sports : Data interpretation is increasingly used in sports to inform strategy and improve performance. For example, coaches may analyze performance data to identify areas for improvement or use predictive modeling to assess the likelihood of injuries or other risks.

When to use Data Interpretation

Data interpretation is used to make sense of complex data and to draw conclusions from it. It is particularly useful when working with large datasets or when trying to identify patterns or trends in the data. Data interpretation can be used in a variety of settings, including scientific research, business analysis, and public policy.

In scientific research, data interpretation is often used to draw conclusions from experiments or studies. Researchers use statistical analysis and data visualization techniques to interpret their data and to identify patterns or relationships between variables. This can help them to understand the underlying mechanisms of their research and to develop new hypotheses.

In business analysis, data interpretation is used to analyze market trends and consumer behavior. Companies can use data interpretation to identify patterns in customer buying habits, to understand market trends, and to develop marketing strategies that target specific customer segments.

In public policy, data interpretation is used to inform decision-making and to evaluate the effectiveness of policies and programs. Governments and other organizations use data interpretation to track the impact of policies and programs over time, to identify areas where improvements are needed, and to develop evidence-based policy recommendations.

In general, data interpretation is useful whenever large amounts of data need to be analyzed and understood in order to make informed decisions.

Data Interpretation Examples

Here are some real-time examples of data interpretation:

  • Social media analytics : Social media platforms generate vast amounts of data every second, and businesses can use this data to analyze customer behavior, track sentiment, and identify trends. Data interpretation in social media analytics involves analyzing data in real-time to identify patterns and trends that can help businesses make informed decisions about marketing strategies and customer engagement.
  • Healthcare analytics: Healthcare organizations use data interpretation to analyze patient data, track outcomes, and identify areas where improvements are needed. Real-time data interpretation can help healthcare providers make quick decisions about patient care, such as identifying patients who are at risk of developing complications or adverse events.
  • Financial analysis: Real-time data interpretation is essential for financial analysis, where traders and analysts need to make quick decisions based on changing market conditions. Financial analysts use data interpretation to track market trends, identify opportunities for investment, and develop trading strategies.
  • Environmental monitoring : Real-time data interpretation is important for environmental monitoring, where data is collected from various sources such as satellites, sensors, and weather stations. Data interpretation helps to identify patterns and trends that can help predict natural disasters, track changes in the environment, and inform decision-making about environmental policies.
  • Traffic management: Real-time data interpretation is used for traffic management, where traffic sensors collect data on traffic flow, congestion, and accidents. Data interpretation helps to identify areas where traffic congestion is high, and helps traffic management authorities make decisions about road maintenance, traffic signal timing, and other strategies to improve traffic flow.

Data Interpretation Questions

Data Interpretation Questions samples:

  • Medical : What is the correlation between a patient’s age and their risk of developing a certain disease?
  • Environmental Science: What is the trend in the concentration of a certain pollutant in a particular body of water over the past 10 years?
  • Finance : What is the correlation between a company’s stock price and its quarterly revenue?
  • Education : What is the trend in graduation rates for a particular high school over the past 5 years?
  • Marketing : What is the correlation between a company’s advertising budget and its sales revenue?
  • Sports : What is the trend in the number of home runs hit by a particular baseball player over the past 3 seasons?
  • Social Science: What is the correlation between a person’s level of education and their income level?

In order to answer these questions, you would need to analyze and interpret the data using statistical methods, graphs, and other visualization tools.

Purpose of Data Interpretation

The purpose of data interpretation is to make sense of complex data by analyzing and drawing insights from it. The process of data interpretation involves identifying patterns and trends, making comparisons, and drawing conclusions based on the data. The ultimate goal of data interpretation is to use the insights gained from the analysis to inform decision-making.

Data interpretation is important because it allows individuals and organizations to:

  • Understand complex data : Data interpretation helps individuals and organizations to make sense of complex data sets that would otherwise be difficult to understand.
  • Identify patterns and trends : Data interpretation helps to identify patterns and trends in data, which can reveal important insights about the underlying processes and relationships.
  • Make informed decisions: Data interpretation provides individuals and organizations with the information they need to make informed decisions based on the insights gained from the data analysis.
  • Evaluate performance : Data interpretation helps individuals and organizations to evaluate their performance over time and to identify areas where improvements can be made.
  • Communicate findings: Data interpretation allows individuals and organizations to communicate their findings to others in a clear and concise manner, which is essential for informing stakeholders and making changes based on the insights gained from the analysis.

Characteristics of Data Interpretation

Here are some characteristics of data interpretation:

  • Contextual : Data interpretation is always contextual, meaning that the interpretation of data is dependent on the context in which it is analyzed. The same data may have different meanings depending on the context in which it is analyzed.
  • Iterative : Data interpretation is an iterative process, meaning that it often involves multiple rounds of analysis and refinement as more data becomes available or as new insights are gained from the analysis.
  • Subjective : Data interpretation is often subjective, as it involves the interpretation of data by individuals who may have different perspectives and biases. It is important to acknowledge and address these biases when interpreting data.
  • Analytical : Data interpretation involves the use of analytical tools and techniques to analyze and draw insights from data. These may include statistical analysis, data visualization, and other data analysis methods.
  • Evidence-based : Data interpretation is evidence-based, meaning that it is based on the data and the insights gained from the analysis. It is important to ensure that the data used in the analysis is accurate, relevant, and reliable.
  • Actionable : Data interpretation is actionable, meaning that it provides insights that can be used to inform decision-making and to drive action. The ultimate goal of data interpretation is to use the insights gained from the analysis to improve performance or to achieve specific goals.

Advantages of Data Interpretation

Data interpretation has several advantages, including:

  • Improved decision-making: Data interpretation provides insights that can be used to inform decision-making. By analyzing data and drawing insights from it, individuals and organizations can make informed decisions based on evidence rather than intuition.
  • Identification of patterns and trends: Data interpretation helps to identify patterns and trends in data, which can reveal important insights about the underlying processes and relationships. This information can be used to improve performance or to achieve specific goals.
  • Evaluation of performance: Data interpretation helps individuals and organizations to evaluate their performance over time and to identify areas where improvements can be made. By analyzing data, organizations can identify strengths and weaknesses and make changes to improve their performance.
  • Communication of findings: Data interpretation allows individuals and organizations to communicate their findings to others in a clear and concise manner, which is essential for informing stakeholders and making changes based on the insights gained from the analysis.
  • Better resource allocation: Data interpretation can help organizations allocate resources more efficiently by identifying areas where resources are needed most. By analyzing data, organizations can identify areas where resources are being underutilized or where additional resources are needed to improve performance.
  • Improved competitiveness : Data interpretation can give organizations a competitive advantage by providing insights that help to improve performance, reduce costs, or identify new opportunities for growth.

Limitations of Data Interpretation

Data interpretation has some limitations, including:

  • Limited by the quality of data: The quality of data used in data interpretation can greatly impact the accuracy of the insights gained from the analysis. Poor quality data can lead to incorrect conclusions and decisions.
  • Subjectivity: Data interpretation can be subjective, as it involves the interpretation of data by individuals who may have different perspectives and biases. This can lead to different interpretations of the same data.
  • Limited by analytical tools: The analytical tools and techniques used in data interpretation can also limit the accuracy of the insights gained from the analysis. Different analytical tools may yield different results, and some tools may not be suitable for certain types of data.
  • Time-consuming: Data interpretation can be a time-consuming process, particularly for large and complex data sets. This can make it difficult to quickly make decisions based on the insights gained from the analysis.
  • Incomplete data: Data interpretation can be limited by incomplete data sets, which may not provide a complete picture of the situation being analyzed. Incomplete data can lead to incorrect conclusions and decisions.
  • Limited by context: Data interpretation is always contextual, meaning that the interpretation of data is dependent on the context in which it is analyzed. The same data may have different meanings depending on the context in which it is analyzed.

Difference between Data Interpretation and Data Analysis

Data interpretation and data analysis are two different but closely related processes in data-driven decision-making.

Data analysis refers to the process of inspecting and examining data using statistical and computational methods to derive insights and conclusions from it. It involves cleaning, transforming, and modeling the data to uncover patterns, relationships, and trends that can help in understanding the underlying phenomena.

Data interpretation, on the other hand, refers to the process of making sense of the findings from the data analysis by contextualizing them within the larger problem domain. It involves identifying the key takeaways from the data analysis, assessing their relevance and significance to the problem at hand, and communicating the insights in a clear and actionable manner.

In short, data analysis is about uncovering insights from the data, while data interpretation is about making sense of those insights and translating them into actionable recommendations.


What is Data Interpretation? Methods, Examples & Tools


What is Data Interpretation?


Data interpretation is the process of making sense of data and turning it into actionable insights. With the rise of big data and advanced technologies, it has become more important than ever to be able to effectively interpret and understand data.

In today's fast-paced business environment, companies rely on data to make informed decisions and drive growth. However, with the sheer volume of data available, it can be challenging to know where to start and how to make the most of it.

This guide provides a comprehensive overview of data interpretation, covering everything from the basics of what it is to the benefits and best practices.

Data interpretation refers to the process of taking raw data and transforming it into useful information. This involves analyzing the data to identify patterns, trends, and relationships, and then presenting the results in a meaningful way. Data interpretation is an essential part of data analysis, and it is used in a wide range of fields, including business, marketing, healthcare, and many more.

Importance of Data Interpretation in Today's World

Data interpretation is critical to making informed decisions and driving growth in today's data-driven world. With the increasing availability of data, companies can now gain valuable insights into their operations, customer behavior, and market trends. Data interpretation allows businesses to make informed decisions, identify new opportunities, and improve overall efficiency.

There are three main types of data interpretation: quantitative, qualitative, and mixed methods.

Quantitative data interpretation refers to the process of analyzing numerical data. This type of data is often used to measure and quantify specific characteristics, such as sales figures, customer satisfaction ratings, and employee productivity.

Qualitative data interpretation refers to the process of analyzing non-numerical data, such as text, images, and audio. This data type is often used to gain a deeper understanding of customer attitudes and opinions and to identify patterns and trends.

Mixed methods data interpretation combines both quantitative and qualitative data to provide a more comprehensive understanding of a particular subject. This approach is particularly useful when analyzing data that has both numerical and non-numerical components, such as customer feedback data.

There are several data interpretation methods, including descriptive statistics, inferential statistics, and visualization techniques.

Descriptive statistics involve summarizing and presenting data in a way that makes it easy to understand. This can include calculating measures such as mean, median, mode, and standard deviation.

Inferential statistics involves making inferences and predictions about a population based on a sample of data. This type of data interpretation involves the use of statistical models and algorithms to identify patterns and relationships in the data.

Visualization techniques involve creating visual representations of data, such as graphs, charts, and maps. These techniques are particularly useful for communicating complex data in an easy-to-understand manner and identifying data patterns and trends.


Data interpretation plays a crucial role in decision-making and helps organizations make informed choices. There are numerous benefits of data interpretation, including:

  • Improved decision-making: Data interpretation provides organizations with the information they need to make informed decisions. By analyzing data, organizations can identify trends, patterns, and relationships that they may not have been able to see otherwise.
  • Increased efficiency: By automating the data interpretation process, organizations can save time and improve their overall efficiency. With the right tools and methods, data interpretation can be completed quickly and accurately, providing organizations with the information they need to make decisions more efficiently.
  • Better collaboration: Data interpretation can help organizations work more effectively with others, such as stakeholders, partners, and clients. By providing a common understanding of the data and its implications, organizations can collaborate more effectively and make better decisions.
  • Increased accuracy: Data interpretation helps to ensure that data is accurate and consistent, reducing the risk of errors and miscommunication. By using data interpretation techniques, organizations can identify errors and inconsistencies in their data, making it possible to correct them and ensure the accuracy of their information.
  • Enhanced transparency: Data interpretation can also increase transparency, helping organizations demonstrate their commitment to ethical and responsible data management. By providing clear and concise information, organizations can build trust and credibility with their stakeholders.
  • Better resource allocation: Data interpretation can help organizations make better decisions about resource allocation. By analyzing data, organizations can identify areas where they are spending too much time or money and make adjustments to optimize their resources.
  • Improved planning and forecasting: Data interpretation can also help organizations plan for the future. By analyzing historical data, organizations can identify trends and patterns that inform their forecasting and planning efforts.

Data interpretation is a process that involves several steps, including:

  • Data collection: The first step in data interpretation is to collect data from various sources, such as surveys, databases, and websites. This data should be relevant to the issue or problem the organization is trying to solve.
  • Data preparation: Once data is collected, it needs to be prepared for analysis. This may involve cleaning the data to remove errors, missing values, or outliers. It may also include transforming the data into a more suitable format for analysis.
  • Data analysis: The next step is to analyze the data using various techniques, such as statistical analysis, visualization, and modeling. This analysis should be focused on uncovering trends, patterns, and relationships in the data.
  • Data interpretation: Once the data has been analyzed, it needs to be interpreted to determine what the results mean. This may involve identifying key insights, drawing conclusions, and making recommendations.
  • Data communication: The final step in the data interpretation process is to communicate the results and insights to others. This may involve creating visualizations, reports, or presentations to share the results with stakeholders.

Data interpretation can be applied in a variety of settings and industries. Here are a few examples of how data interpretation can be used:

  • Marketing: Marketers use data interpretation to analyze customer behavior, preferences, and trends to inform marketing strategies and campaigns.
  • Healthcare: Healthcare professionals use data interpretation to analyze patient data, including medical histories and test results, to diagnose and treat illnesses.
  • Financial Services: Financial services companies use data interpretation to analyze financial data, such as investment performance, to inform investment decisions and strategies.
  • Retail: Retail companies use data interpretation to analyze sales data, customer behavior, and market trends to inform merchandising and pricing strategies.
  • Manufacturing: Manufacturers use data interpretation to analyze production data, such as machine performance and inventory levels, to inform production and inventory management decisions.

These are just a few examples of how data interpretation can be applied in various settings. The possibilities are endless, and data interpretation can provide valuable insights in any industry where data is collected and analyzed.

Data interpretation is a crucial step in the data analysis process, and the right tools can make a significant difference in accuracy and efficiency. Here are a few tools that can help you with data interpretation:

  • Layer: Layer is an add-on that works on top of Google Sheets to improve team data workflows. It lets you share parts of your spreadsheet, including sheets or even cell ranges, with different collaborators or stakeholders; review and approve edits by collaborators to their respective sheets before merging them back with your master spreadsheet; and integrate popular tools and connect your tech stack to sync data from different sources, giving you a timely, holistic view of your data.
  • Google Sheets: Google Sheets is a free, web-based spreadsheet application that allows users to create, edit, and format spreadsheets. It provides a range of features for data interpretation, including functions, charts, and pivot tables.
  • Microsoft Excel: Microsoft Excel is a spreadsheet software widely used for data interpretation. It provides various functions and features to help you analyze and interpret data, including sorting, filtering, pivot tables, and charts.
  • Tableau: Tableau is a data visualization tool that helps you see and understand your data. It allows you to connect to various data sources and create interactive dashboards and visualizations to communicate insights.
  • Power BI: Power BI is a business analytics service that provides interactive visualizations and business intelligence capabilities with an easy interface for end users to create their own reports and dashboards.
  • R: R is a programming language and software environment for statistical computing and graphics. It is widely used by statisticians, data scientists, and researchers to analyze and interpret data.

Each of these tools has its strengths and weaknesses, and the right tool for you will depend on your specific needs and requirements. Consider the size and complexity of your data, the analysis methods you need to use, and the level of customization you require, before making a decision.


Data interpretation can be a complex and challenging process, but there are several solutions that can help overcome some of the most common difficulties.

Data interpretation can often be biased based on the data sources and the people who interpret it. It is important to eliminate these biases to get a clear and accurate understanding of the data. This can be achieved by diversifying the data sources, involving multiple stakeholders in the data interpretation process, and regularly reviewing the data interpretation methodology.

Missing data can often result in inaccuracies in the data interpretation process. To overcome this challenge, data scientists can use imputation methods to fill in missing data or use statistical models that can account for missing data.
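As a small, hedged sketch of the imputation idea, the following Python snippet fills missing numeric values with the column median using scikit-learn's SimpleImputer; the DataFrame and its values are invented for illustration.

```python
import pandas as pd
from sklearn.impute import SimpleImputer

# Invented data with gaps standing in for real survey or sales records.
df = pd.DataFrame({
    "age": [25, None, 41, 33, None],
    "income": [48000, 52000, None, 61000, 45000],
})

# Median imputation is a common, robust default for numeric columns.
imputer = SimpleImputer(strategy="median")
df[["age", "income"]] = imputer.fit_transform(df[["age", "income"]])

print(df)
```

More sophisticated options (multiple imputation, model-based imputation) exist, but the workflow shape is the same: choose a strategy, apply it consistently, and document it.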

Data privacy is a crucial concern in today's data-driven world. To address this, organizations should ensure that their data interpretation processes align with data privacy regulations and that the data being analyzed is adequately secured.

Data interpretation is used in a variety of industries and for a range of purposes. Here are a few examples:

Sales trend analysis is a common use of data interpretation in the business world. This type of analysis involves looking at sales data over time to identify trends and patterns, which can then be used to make informed business decisions.

Customer segmentation is a data interpretation technique that categorizes customers into segments based on common characteristics. This can be used to create more targeted marketing campaigns and to improve customer engagement.
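A minimal sketch of how such segmentation might look in code, assuming two invented behavioural features and scikit-learn's KMeans:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customer features: [annual_spend, visits_per_month].
X = np.array([
    [1200, 2], [300, 1], [4500, 8],
    [5000, 9], [250, 1], [1100, 3],
])

# Group customers into three segments by similarity of behaviour.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)
print(labels)  # segment index assigned to each customer
```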

Predictive maintenance is a data interpretation technique that uses machine learning algorithms to predict when equipment is likely to fail. This can help organizations proactively address potential issues and reduce downtime.

Fraud detection is a data interpretation use case that applies machine learning algorithms to identify patterns and anomalies that may indicate fraudulent activity.
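As a hedged sketch of the shared idea behind predictive maintenance and fraud detection (flagging records that deviate from the norm), scikit-learn's IsolationForest can mark anomalies; the transaction amounts below are invented.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented transaction amounts; the last value is a deliberate outlier.
amounts = np.array([[25.0], [40.0], [32.5], [28.0], [35.0], [9800.0]])

model = IsolationForest(contamination=0.2, random_state=0)
flags = model.fit_predict(amounts)  # -1 marks likely anomalies, 1 marks normal
print(flags)
```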

To ensure that data interpretation processes are as effective and accurate as possible, it is recommended to follow some best practices.

Data quality is critical to the accuracy of data interpretation. To maintain data quality, organizations should regularly review and validate their data, eliminate data biases, and address missing data.

Choosing the right data interpretation tools is crucial to the success of the data interpretation process. Organizations should consider factors such as cost, compatibility with existing tools and processes, and the complexity of the data to be analyzed when choosing the right data interpretation tool. Layer, an add-on that equips teams with the tools to increase efficiency and data quality in their processes on top of Google Sheets, is an excellent choice for organizations looking to optimize their data interpretation process.

Data interpretation results need to be communicated effectively to stakeholders in a way they can understand. This can be achieved by using visual aids such as charts and graphs and presenting the results clearly and concisely.

The world of data interpretation is constantly evolving, and organizations must stay up to date with the latest developments and best practices. Ongoing learning and development initiatives, such as attending workshops and conferences, can help organizations stay ahead of the curve.

Regardless of the data interpretation method used, following best practices can help ensure accurate and reliable results. These best practices include:

  • Validate data sources: It is essential to validate the data sources used to ensure they are accurate, up-to-date, and relevant. This helps to minimize the potential for errors in the data interpretation process.
  • Use appropriate statistical techniques: The choice of statistical methods used for data interpretation should be suitable for the type of data being analyzed. For example, regression analysis is often used for analyzing trends in large data sets, while chi-square tests are used for categorical data (see the sketch after this list).
  • Graph and visualize data: Graphical representations of data can help to quickly identify patterns and trends. Visualization tools like histograms, scatter plots, and bar graphs can make the data more understandable and easier to interpret.
  • Document and explain results: Results from data interpretation should be documented and presented in a clear and concise manner. This includes providing context for the results and explaining how they were obtained.
  • Use a robust data interpretation tool: Data interpretation tools can help to automate the process and minimize the risk of errors. However, choosing a reliable, user-friendly tool that provides the features and functionalities needed to support the data interpretation process is vital.
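To make the statistical-technique and visualization bullets concrete, here is a short, hedged example using scipy and matplotlib; the contingency-table counts are invented.

```python
import matplotlib.pyplot as plt
from scipy.stats import chi2_contingency

# Invented counts: rows = customer group, columns = churned / retained.
table = [[30, 170], [55, 145]]

# Chi-square test for association between group and churn (categorical data).
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")  # a small p suggests the groups differ

# Visualise the same counts as a stacked bar chart.
groups = ["Group A", "Group B"]
churned = [row[0] for row in table]
retained = [row[1] for row in table]
plt.bar(groups, churned, label="Churned")
plt.bar(groups, retained, bottom=churned, label="Retained")
plt.ylabel("Customers")
plt.legend()
plt.show()
```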

Data interpretation is a crucial aspect of data analysis and enables organizations to turn large amounts of data into actionable insights. The guide covered the definition, importance, types, methods, benefits, process, analysis, tools, use cases, and best practices of data interpretation.

As technology continues to advance, the methods and tools used in data interpretation will also evolve. Predictive analytics and artificial intelligence will play an increasingly important role in data interpretation as organizations strive to automate and streamline their data analysis processes. In addition, big data and the Internet of Things (IoT) will lead to the generation of vast amounts of data that will need to be analyzed and interpreted effectively.

Data interpretation is a critical skill that enables organizations to make informed decisions based on data. It is essential that organizations invest in data interpretation and the development of their in-house data interpretation skills, whether through training programs or the use of specialized tools like Layer. By staying up-to-date with the latest trends and best practices in data interpretation, organizations can maximize the value of their data and drive growth and success.


Leeds Beckett University

Skills for Learning: Research Skills

Data analysis is an ongoing process that should occur throughout your research project. Suitable data-analysis methods must be selected when you write your research proposal. The nature of your data (i.e. quantitative or qualitative) will be influenced by your research design and purpose. The data will also influence the analysis methods selected.

We run interactive workshops to help you develop skills related to doing research, such as data analysis, writing literature reviews and preparing for dissertations. Find out more on the Skills for Learning Workshops page.

We have online academic skills modules within MyBeckett for all levels of university study. These modules will help your academic development and support your success at LBU. You can work through the modules at your own pace, revisiting them as required. Find out more from our FAQ ‘What academic skills modules are available?’

Quantitative data analysis

Broadly speaking, 'statistics' refers to methods, tools and techniques used to collect, organise and interpret data. The goal of statistics is to gain understanding from data. Therefore, you need to know how to:

  • Produce data – for example, by handing out a questionnaire or doing an experiment.
  • Organise, summarise, present and analyse data.
  • Draw valid conclusions from findings.

There are a number of statistical methods you can use to analyse data. However, choosing an appropriate statistical method should follow naturally from your research design. Therefore, you should think about data analysis at the early stages of your study design. You may need to consult a statistician for help with this.

Tips for working with statistical data

  • Plan so that the data you get has a good chance of successfully tackling the research problem. This will involve reading literature on your subject, as well as on what makes a good study.
  • To reach useful conclusions, you need to reduce uncertainties or 'noise'. Thus, you will need a sufficiently large data sample. A large sample will improve precision. However, this must be balanced against the 'costs' (time and money) of collection.
  • Consider the logistics. Will there be problems in obtaining sufficient high-quality data? Think about accuracy, trustworthiness and completeness.
  • Statistics are based on random samples. Consider whether your sample will be suited to this sort of analysis. Might there be biases to think about?
  • How will you deal with missing values (any data that is not recorded for some reason)? These can result from gaps in a record or whole records being missed out.
  • When analysing data, start by looking at each variable separately. Conduct initial/exploratory data analysis using graphical displays before looking at variables in conjunction or anything more complicated (see the sketch after this list). This process can help locate errors in the data and also gives you a 'feel' for the data.
  • Look out for patterns of 'missingness'. They are likely to alert you if there’s a problem. If the 'missingness' is not random, then it will have an impact on the results.
  • Be vigilant and think through what you are doing at all times. Think critically. Statistics are not just mathematical tricks that a computer sorts out. Rather, analysing statistical data is a process that the human mind must interpret!
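As a hedged sketch of looking at each variable separately first, the following pandas snippet assumes an invented dataset:

```python
import pandas as pd

# Invented data standing in for a real study dataset.
df = pd.DataFrame({
    "age": [21, 34, 29, 41, 35, 27],
    "score": [55, 72, 64, 80, None, 69],
})

# Explore one variable at a time before looking at relationships.
print(df.describe())    # summary statistics per numeric column
print(df.isna().sum())  # missing values per column, to spot 'missingness'
df.hist()               # one quick histogram per variable (needs matplotlib)
```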

Top tips! Try inventing or generating the sort of data you might get and see if you can analyse it. Make sure that your process works before gathering actual data. Think what the output of an analytic procedure will look like before doing it for real.

(Note: it is actually difficult to generate realistic data. There are fraud-detection methods in place to identify data that has been fabricated. So, remember to get rid of your practice data before analysing the real stuff!)
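One hedged way to invent practice data of this sort is with numpy's random generator; every parameter below (mean, spread, sample size) is an assumption chosen purely for rehearsal.

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # fixed seed so the rehearsal is repeatable

# Invent a plausible-looking sample: 100 exam scores, roughly normal, clipped to 0-100.
practice_scores = rng.normal(loc=65, scale=12, size=100).clip(0, 100)

# Rehearse the intended analysis on the fake data before collecting the real thing.
print(practice_scores.mean(), practice_scores.std())
```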

Statistical software packages

Software packages can be used to analyse and present data. The most widely used ones are SPSS and NVivo.

SPSS is a statistical-analysis and data-management package for quantitative data analysis. Click on ‘How do I install SPSS?’ to learn how to download SPSS to your personal device. SPSS can perform a wide variety of statistical procedures (a hedged sketch of rough Python equivalents follows the list). Some examples are:

  • Data management (e.g. creating subsets of data or transforming data).
  • Summarising, describing or presenting data (e.g. mean, median and frequency).
  • Looking at the distribution of data (e.g. standard deviation).
  • Comparing groups for significant differences using parametric (e.g. t-test) and non-parametric (e.g. chi-square) tests.
  • Identifying significant relationships between variables (e.g. correlation).
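For readers without SPSS access, roughly comparable procedures are available in Python via scipy. This is an illustrative sketch on invented data, not an SPSS workflow; the comments only gesture at the corresponding SPSS procedures.

```python
import numpy as np
from scipy import stats

# Two invented samples standing in for two study groups.
group_a = np.array([5.1, 6.2, 5.8, 6.0, 5.5])
group_b = np.array([6.8, 7.1, 6.5, 7.4, 6.9])

# Summarising data (cf. SPSS descriptives): mean and sample standard deviation.
print(group_a.mean(), group_a.std(ddof=1))

# Comparing groups with a parametric test (cf. independent-samples t-test).
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(t_stat, p_value)

# Identifying relationships between variables (cf. bivariate correlation).
r, p = stats.pearsonr(group_a, group_b)
print(r, p)
```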

NVivo can be used for qualitative data analysis. It is suitable for use with a wide range of methodologies. Click on ‘How do I access NVivo’ to learn how to download NVivo to your personal device. NVivo supports grounded theory, survey data, case studies, focus groups, phenomenology, field research and action research. With NVivo, you can:

  • Process data such as interview transcripts, literature or media extracts, and historical documents.
  • Code data on screen and explore all coding and documents interactively.
  • Rearrange, restructure, extend and edit text, coding and coding relationships.
  • Search imported text for words, phrases or patterns, and automatically code the results.

Qualitative data analysis

Miles and Huberman (1994) point out that there are diverse approaches to qualitative research and analysis. They suggest, however, that it is possible to identify 'a fairly classic set of analytic moves arranged in sequence'. This involves:

  1. Affixing codes to a set of field notes drawn from observation or interviews.
  2. Noting reflections or other remarks in the margins.
  3. Sorting/sifting through these materials to identify: a) similar phrases, relationships between variables, patterns and themes and b) distinct differences between subgroups and common sequences.
  4. Isolating these patterns/processes and commonalities/differences, then taking them out to the field in the next wave of data collection.
  5. Highlighting generalisations and relating them to your original research themes.
  6. Taking the generalisations and analysing them in relation to theoretical perspectives.

(Miles and Huberman, 1994)

Patterns and generalisations are usually arrived at through a process of analytic induction (see above points 5 and 6). Qualitative analysis rarely involves statistical analysis of relationships between variables. Qualitative analysis aims to gain in-depth understanding of concepts, opinions or experiences.

Presenting information

There are a number of different ways of presenting and communicating information. The particular format you use is dependent upon the type of data generated from the methods you have employed.

Here are some appropriate ways of presenting information for different types of data:

Bar charts: These may be useful for comparing relative sizes. However, they tend to use a large amount of ink to display a relatively small amount of information. Consider a simple line chart as an alternative.

Pie charts: These have the benefit of indicating that the data must add up to 100%. However, they make it difficult for viewers to distinguish relative sizes, especially if two slices have a difference of less than 10%.

Other examples of presenting data in graphical form include line charts and scatter plots.
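As a hedged illustration of the bar-versus-line advice above, this matplotlib sketch plots the same invented series both ways:

```python
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr"]
values = [12, 15, 14, 18]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

ax1.bar(months, values)               # bar chart: more ink for the same data
ax1.set_title("Bar chart")

ax2.plot(months, values, marker="o")  # line chart: same information, less ink
ax2.set_title("Line chart")

plt.tight_layout()
plt.show()
```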

Qualitative data is more likely to be presented in text form, for example, using quotations from interviews or field diaries.

  • Plan ahead, thinking carefully about how you will analyse and present your data.
  • Think through possible restrictions to resources you may encounter and plan accordingly.
  • Find out about the different IT packages available for analysing your data and select the most appropriate.
  • If necessary, allow time to attend an introductory course on a particular computer package. You can book SPSS and NVivo workshops via MyHub.
  • Code your data appropriately, assigning conceptual or numerical codes as suitable.
  • Organise your data so it can be analysed and presented easily.
  • Choose the most suitable way of presenting your information, according to the type of data collected. This will allow your information to be understood and interpreted better.

Primary, secondary and tertiary sources

Information sources are sometimes categorised as primary, secondary or tertiary sources depending on whether or not they are ‘original’ materials or data. For some research projects, you may need to use primary sources as well as secondary or tertiary sources. However, the distinction between primary and secondary sources is not always clear and depends on the context. For example, a newspaper article might usually be categorised as a secondary source. But it could also be regarded as a primary source if it were an article giving a first-hand account of a historical event written close to the time it occurred.

  • Primary sources
  • Secondary sources
  • Tertiary sources
  • Grey literature

Primary sources are original sources of information that provide first-hand accounts of what is being experienced or researched. They enable you to get as close to the actual event or research as possible. They are useful for getting the most contemporary information about a topic.

Examples include diary entries, newspaper articles, census data, journal articles with original reports of research, letters, email or other correspondence, original manuscripts and archives, interviews, research data and reports, statistics, autobiographies, exhibitions, films, and artists' writings.

Some information will be available on an Open Access basis, freely accessible online. However, many academic sources are paywalled, and you may need to log in as a Leeds Beckett student to access them. Where Leeds Beckett does not have access to a source, you can use our Request It! Service.

Secondary sources interpret, evaluate or analyse primary sources. They're useful for providing background information on a topic, or for looking back at an event from a current perspective. The majority of your literature searching will probably be done to find secondary sources on your topic.

Examples include journal articles which review or interpret original findings, popular magazine articles commenting on more serious research, textbooks and biographies.

The term tertiary sources isn't used a great deal. There's overlap between what might be considered a secondary source and a tertiary source. One definition is that a tertiary source brings together secondary sources.

Examples include almanacs, fact books, bibliographies, dictionaries and encyclopaedias, directories, indexes and abstracts. They can be useful for introductory information or an overview of a topic in the early stages of research.

Depending on your subject of study, grey literature may be another source you need to use. Grey literature includes technical or research reports, theses and dissertations, conference papers, government documents, white papers, and so on.

Artificial intelligence tools

Before using any generative artificial intelligence or paraphrasing tools in your assessments, you should check if this is permitted on your course.

If their use is permitted on your course, you must acknowledge any use of generative artificial intelligence tools such as ChatGPT or paraphrasing tools (e.g., Grammarly, Quillbot, etc.), even if you have only used them to generate ideas for your assessments or for proofreading.

The Oxford Handbook of Qualitative Research (2nd edn)

31 Interpretation in Qualitative Research: What, Why, How

Allen Trent, College of Education, University of Wyoming

Jeasik Cho, Department of Educational Studies, University of Wyoming

Published: 02 September 2020

This chapter addresses a wide range of concepts related to interpretation in qualitative research, examines the meaning and importance of interpretation in qualitative inquiry, and explores the ways methodology, data, and the self/researcher as instrument interact and impact interpretive processes. Additionally, the chapter presents a series of strategies for qualitative researchers engaged in the process of interpretation and closes by presenting a framework for qualitative researchers designed to inform their interpretations. The framework includes attention to the key qualitative research concepts transparency, reflexivity, analysis, validity, evidence, and literature. Four questions frame the chapter: What is interpretation, and why are interpretive strategies important in qualitative research? How do methodology, data, and the researcher/self impact interpretation in qualitative research? How do qualitative researchers engage in the process of interpretation? And, in what ways can a framework for interpretation strategies support qualitative researchers across multiple methodologies and paradigms?

“All human knowledge takes the form of interpretation.” In this seemingly simple statement, the late German philosopher Walter Benjamin asserted that all knowledge is mediated and constructed. In doing so, he situates himself as an interpretivist, one who believes that human subjectivity, individuals’ characteristics, feelings, opinions, and experiential backgrounds impact observations, analysis of these observations, and resultant knowledge/truth constructions. Hammersley (2013) noted,

People—unlike atoms … actively interpret or make sense of their environment and of themselves; the ways in which they do this are shaped by the particular cultures in which they live; and these distinctive cultural orientations will strongly influence not only what they believe but also what they do. (p. 26)

Contrast this perspective with positivist claims that knowledge is based exclusively on external facts, objectively observed and recorded. Interpretivists, then, acknowledge that if positivistic notions of knowledge and truth are inadequate to explain social phenomena, then positivist, hard science approaches to research (i.e., the scientific method and its variants) are also inadequate and can even have a detrimental impact. According to Polanyi (1967), “The ideal of exact science would turn out to be fundamentally misleading and possibly a source of devastating fallacies” (as cited in Packer, 2018, p. 71). So, although the literature often contrasts quantitative and qualitative research as largely a difference in kinds of data employed (numerical vs. linguistic), instead, the primary differentiation is in the foundational, paradigmatic assumptions about truth, knowledge, and objectivity.

This chapter is about interpretation and the strategies that qualitative researchers use to interpret a wide variety of “texts.” Knowledge, we assert, is constructed, both individually (constructivism) and socially (constructionism). We accept this as our starting point. Our aim here is to share our perspective on a broad set of concepts associated with the interpretive, or meaning-making, process. Although it may happen at different times and in different ways, interpretation is part of almost all qualitative research.

Qualitative research is an umbrella term that encompasses a wide array of paradigmatic views, goals, and methods. Still, there are key unifying elements that include a generally constructionist epistemological standpoint, attention to primarily linguistic data, and generally accepted protocols or syntax for conducting research. Typically, qualitative researchers begin with a starting point—a curiosity, a problem in need of solutions, a research question, and/or a desire to better understand a situation from the “native” perspectives of the individuals who inhabit that context. This is what anthropologists call the emic, or insider’s, perspective. Olivier de Sardan (2015) wrote, “It evokes the meaning that social facts have for the actors concerned. It is opposed to the term etic, which, at times, designates more external or ‘objective’ data, and, at others, the researcher’s interpretive analysis” (p. 65).

From this starting point, researchers determine the appropriate kinds of data to collect, engage in fieldwork as participant observers to gather these data, organize the data, look for patterns, and attempt to understand the emic perspectives while integrating their own emergent interpretations. Researchers construct meaning from data by synthesizing research “findings,” “assertions,” or “theories” that can be shared so that others may also gain insights from the conducted inquiry. This interpretive process has a long history; hermeneutics, the theory of interpretation, blossomed in the 17th century in the form of biblical exegesis (Packer, 2018).

Although there are commonalities that cut across most forms of qualitative research, this is not to say that there is an accepted, linear, standardized approach. To be sure, there are an infinite number of variations and nuances in the qualitative research process. For example, some forms of inquiry begin with a firm research question; others start without even a clear focus for study. Grounded theorists begin data analysis and interpretation very early in the research process, whereas some case study researchers, for example, may collect data in the field for a period of time before seriously considering the data and its implications. Some ethnographers may be a part of the context (e.g., observing in classrooms), but they may assume more observer-like roles, as opposed to actively participating in the context. Alternatively, action researchers, in studying issues related to their own practice, are necessarily situated toward the participant end of the participant–observer continuum.

Our focus here is on one integrated part of the qualitative research process, interpretation, the hermeneutic process of collective and individual “meaning making.” Like Willig (2017), we believe “interpretation is at the heart of qualitative research because qualitative research is concerned with meaning and the process of meaning-making … qualitative data … needs to be given meaning by the researcher” (p. 276). As we discuss throughout this chapter, researchers take a variety of approaches to interpretation in qualitative work. Four general questions guide our explorations:

What is interpretation, and why are interpretive strategies important in qualitative research?

How do methodology, data, and the researcher/self impact interpretation in qualitative research?

How do qualitative researchers engage in the process of interpretation?

In what ways can a framework for interpretation strategies support qualitative researchers across multiple methodological and paradigmatic views?

We address each of these guiding questions in our attempt to explicate our interpretation of “interpretation” and, as educational researchers, we include examples from our own work to illustrate some key concepts.

What Is Interpretation, and Why Are Interpretive Strategies Important in Qualitative Research?

Qualitative researchers and those writing about qualitative methods often intertwine the terms analysis and interpretation. For example, Hubbard and Power (2003) described data analysis as “bringing order, structure, and meaning to the data” (p. 88). To us, this description combines analysis with interpretation. Although there is nothing wrong with this construction, our understanding aligns more closely with Mills’s (2018) claim that, “put simply, analysis involves summarizing what’s in the data, whereas interpretation involves making sense of—finding meaning in—that data” (p. 176). Hesse-Biber (2017) also separated out the essential process of interpretation. She described the steps in qualitative analysis and interpretation as data preparation, data exploration, and data reduction (all part of Mills’s “analysis” processes), followed by interpretation (pp. 307–328). Willig (2017) elaborated: analysis, she claims, is “sober and systematic,” whereas interpretation is associated with “creativity and the imagination … interpretation is seen as stimulating, it is interesting and it can be illuminating” (p. 276). For the purpose of this chapter, we will adhere to Mills’s distinction, understanding analysis as summarizing and organizing and interpretation as meaning making. Unavoidably, these closely related processes overlap and interact, but our focus will be primarily on the more complex of these endeavors, interpretation. Interpretation, in this sense, is in part translation, but translation is not an objective act. Instead, translation necessarily involves selectivity and the ascribing of meaning. Qualitative researchers “aim beneath manifest behavior to the meaning events have for those who experience them” (Eisner, 1991, p. 35). The presentation of these insider/emic perspectives, coupled with researchers’ own interpretations, is a hallmark of qualitative research.

Qualitative researchers have long borrowed from extant models for fieldwork and interpretation. Approaches from anthropology and the arts have become especially prominent. For example, Eisner’s (1991) form of qualitative inquiry, educational criticism, draws heavily on accepted models of art criticism. T. Barrett (2011), an authority on art criticism, described interpretation as a complex set of processes based on a set of principles. We believe many of these principles apply as readily to qualitative research as they do to critique. The following principles, adapted from T. Barrett’s (2011) principles of interpretation, inform our examination:

Qualitative phenomena have “aboutness”: All social phenomena have meaning, but meanings in this context can be multiple, even contradictory.

Interpretations are persuasive arguments: All interpretations are arguments, and qualitative researchers, like critics, strive to build strong arguments grounded in the information, or data, available.

Some interpretations are better than others: Barrett noted that “some interpretations are better argued, better grounded with evidence, and therefore more reasonable, more certain, and more acceptable than others.” This contradicts the argument that “all interpretations are equal,” heard in the common refrain, “Well, that’s just your interpretation.”

There can be different, competing, and contradictory interpretations of the same phenomena: As noted at the beginning of this chapter, we acknowledge that subjectivity matters, and, unavoidably, it impacts one’s interpretations. As Barrett noted, “Interpretations are often based on a worldview.”

Interpretations are not (and cannot be) “right,” but instead, they can be more or less reasonable, convincing, and informative: There is never one “true” interpretation, but some interpretations are more compelling than others.

Interpretations can be judged by coherence, correspondence, and inclusiveness: Does the argument/interpretation make sense (coherence)? Does the interpretation fit the data (correspondence)? Have all data been attended to, including outlier data that do not necessarily support identified themes (inclusiveness)?

Interpretation is ultimately a communal endeavor: Initial interpretations may be incomplete, nearsighted, and/or narrow, but eventually these interpretations become richer, broader, and more inclusive. Feminist revisionist history projects are an exemplary case. Over time, the writing, art, and cultural contributions of countless women, previously ignored, diminished, or distorted, have come to be accepted as prominent contributions given serious consideration.

So, meaning is conferred; interpretations are socially constructed arguments; multiple interpretations are to be expected; and some interpretations are better than others. As we discuss later in this chapter, what makes an interpretation “better” often hinges on the purpose/goals of the research in question. Interpretations designed to generate theory, or generalizable rules, will be better for responding to research questions aligned with the aims of more traditional quantitative/positivist research, whereas interpretations designed to construct meanings through social interaction, to generate multiple perspectives, and to represent the context-specific perspectives of the research participants are better for researchers constructing thick, contextually rich descriptions, stories, or narratives. The former relies on more atomistic interpretive strategies, whereas the latter adheres to a more holistic approach (Willis, 2007). Both approaches to analysis/interpretation are addressed in more detail later in this chapter.

At this point, readers might ask, Why does interpretation matter, anyway? Our response to this question involves the distinctive nature of interpretation and the ability of the interpretive process to put unique fingerprints on an otherwise relatively static set of data. Once interview data are collected and transcribed (and we realize that even the process of transcription is, in part, interpretive), documents are collected, and observations are recorded, qualitative researchers could just, in good faith and with fidelity, represent the data in as straightforward ways as possible, allowing readers to “see for themselves” by sharing as much actual data (e.g., the transcribed words of the research participants) as possible. This approach, however, includes analysis, what we have defined as summarizing and organizing data for presentation, but it falls short of what we reference and define as interpretation—attempting to explain the meaning of others’ words and actions. According to Lichtman (2013),

While early efforts at qualitative research might have stopped at description, it is now more generally accepted that a qualitative researcher goes beyond pure description.… Many believe that it is the role of the researcher to bring understanding, interpretation, and meaning. (p. 17)

Because we are fond of the arts and arts-based approaches to qualitative research, an example from the late jazz drummer Buddy Rich seems fitting. Rich explained the importance of having the flexibility to interpret: “I don’t think any arranger should ever write a drum part for a drummer, because if a drummer can’t create his own interpretation of the chart, and he plays everything that’s written, he becomes mechanical; he has no freedom.” The same is true for qualitative researchers: without the freedom to interpret, the researcher merely regurgitates, attempting to share with readers/reviewers exactly what the research subjects shared with him or her. It is only through interpretation that the researcher, as a collaborator with unavoidable subjectivities, is able to construct unique, contextualized meaning. Interpretation, then, in this sense, is knowledge construction.

In closing this section, we will illustrate the analysis-versus-interpretation distinction with the following transcript excerpt. In this study, the authors (Trent & Zorko, 2006) were studying student teaching from the perspective of K–12 students. This quote comes from a high school student in a focus group interview. She is describing a student teacher she had:

The right-hand column contains codes or labels applied to parts of the transcript text. Coding will be discussed in more depth later in this chapter, but for now, note that the codes mostly summarize the main ideas of the text, sometimes using the exact words of the research participant. This type of coding is a part of what we have called analysis—organizing and summarizing the data. It is a way of beginning to say “what is” there. As noted, though, most qualitative researchers go deeper. They want to know more than what is; they also ask, What does it mean? This is a question of interpretation.

Specific to the transcript excerpt, researchers might next begin to cluster the early codes into like groups. For example, the teacher “felt targeted,” “assumed kids were going to behave inappropriately,” and appeared to be “overwhelmed.” A researcher might cluster this group of codes in a category called “teacher feelings and perceptions” and may then cluster the codes “could not control class” and “students off task” into a category called “classroom management.” The researcher then, in taking a fresh look at these categories and the included codes, may begin to conclude that what is going on in this situation is that the student teacher does not have sufficient training in classroom management models and strategies and may also be lacking the skills she needs to build relationships with her students. These then would be interpretations, persuasive arguments connected to the study’s data. In this specific example, the researchers might proceed to write a memo about these emerging interpretations. In this memo, they might more clearly define their early categories and may also look through other data to see if there are other codes or categories that align with or overlap this initial analysis. They may write further about their emergent interpretations and, in doing so, may inform future data collection in ways that will allow them to either support or refute their early interpretations. These researchers will also likely find that the processes of analysis and interpretation are inextricably intertwined. Good interpretations very often depend on thorough and thoughtful analyses.

How Do Methodology, Data, and the Researcher/Self Impact Interpretation in Qualitative Research?

Methodological conventions guide interpretation and the use of interpretive strategies. For example, in grounded theory and in similar methodological traditions, “formal analysis begins early in the study and is nearly completed by the end of data collection” (Bogdan & Biklen, 2007, p. 73). Alternatively, for researchers from other traditions, for example, case study researchers, “formal analysis and theory development [interpretation] do not occur until after the data collection is near complete” (p. 73).

Researchers subscribing to methodologies that prescribe early data analysis and interpretation may employ methods like analytic induction or the constant comparison method. In using analytic induction, researchers develop a rough definition of the phenomena under study; collect data to compare to this rough definition; modify the definition as needed, based on cases that both fit and do not fit the definition; and, finally, establish a clear, universal definition (theory) of the phenomena (Robinson, 1951, cited in Bogdan & Biklen, 2007, p. 73). Generally, those using a constant comparison approach begin data collection immediately; identify key issues, events, and activities related to the study that then become categories of focus; collect data that provide incidents of these categories; write about and describe the categories, accounting for specific incidents and seeking others; discover basic processes and relationships; and, finally, code and write about the categories as theory, “grounded” in the data (Glaser, 1965). Although processes like analytic induction and constant comparison can be listed as steps to follow, in actuality, these are more typically recursive processes in which the researcher repeatedly goes back and forth between the data and emerging analyses and interpretations.
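
Because constant comparison is easier to grasp as a loop than as a list of steps, the sketch below models its recursive logic in Python. This is our illustration only, not code from Glaser or from Bogdan and Biklen; the keyword-overlap test and the fit threshold are invented stand-ins for the researcher’s judgment about whether an incident fits an emerging category.

# A minimal sketch of the constant comparison loop: each new data
# excerpt (incident) is compared against emerging categories and
# either joins the best-fitting category or founds a new one.

def shared_words(a, b):
    """Count the content words two excerpts have in common."""
    stop = {"the", "a", "an", "and", "of", "to", "in", "is", "it"}
    return len((set(a.lower().split()) - stop) & (set(b.lower().split()) - stop))

def constant_comparison(excerpts, threshold=2):
    categories = []  # each category: {"label": ..., "members": [...]}
    for excerpt in excerpts:
        # Compare the incident with the members of every category so far.
        best, best_score = None, 0
        for category in categories:
            score = max(shared_words(excerpt, m) for m in category["members"])
            if score > best_score:
                best, best_score = category, score
        if best is not None and best_score >= threshold:
            best["members"].append(excerpt)  # fits an emerging category
        else:
            # No fit: a new category of focus emerges from the data.
            categories.append({"label": excerpt[:30], "members": [excerpt]})
    return categories

In actual practice, of course, the researcher, not a similarity score, decides fit, and earlier categories are revisited, split, and relabeled as understanding deepens, which is precisely the back-and-forth movement between data and emerging analysis described above.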

In addition to methodological conventions that prescribe data analysis early (e.g., grounded theory) or later (e.g., case study) in the inquiry process, methodological approaches also impact the general approach to analysis and interpretation. Ellingson (2011) situated qualitative research methodologies on a continuum spanning “science”-like approaches on one end juxtaposed with “art”-like approaches on the other.

Researchers pursuing a more science-oriented approach seek valid, reliable, generalizable knowledge; believe in neutral, objective researchers; and ultimately claim single, authoritative interpretations. Researchers adhering to these science-focused, postpositivistic approaches may count frequencies, emphasize the validity of the employed coding system, and point to intercoder reliability and random sampling as criteria that bolster the research credibility. Researchers at or near the science end of the continuum might employ analysis and interpretation strategies that include “paired comparisons,” “pile sorts,” “word counts,” identifying “key words in context,” and “triad tests” (Bernard, Wutich, & Ryan, 2017, pp. 112, 381, 113, 170). These researchers may ultimately seek to develop taxonomies or other authoritative final products that organize and explain the collected data.
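
Several of these science-end strategies are mechanical enough that they are often automated. As one illustration, the following Python sketch implements a minimal key-words-in-context routine; the function, the sample transcript line, and the window size are our inventions for demonstration and are not drawn from Bernard et al.

# Key words in context (KWIC): list every occurrence of a key word
# together with the words immediately surrounding it, so the analyst
# can inspect how participants actually use the term.

def kwic(text, keyword, window=4):
    words = text.split()
    hits = []
    for i, word in enumerate(words):
        if word.lower().strip(".,!?") == keyword.lower():
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            hits.append(f"{left} [{word}] {right}")
    return hits

transcript = ("I think community matters because our community supports "
              "new teachers and the community expects them to succeed.")
for line in kwic(transcript, "community"):
    print(line)

A word-count strategy is the same idea reduced further: tally the frequencies and report the most common terms, setting the contextual reading aside.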

For example, in a study we conducted about preservice teachers’ experiences learning to teach second-language learners, the researchers collected larger data sets and used a statistical analysis package to analyze survey data, and the resultant findings included descriptive statistics. These survey results were supported with open-ended, qualitative data. For example, one of the study’s findings was that “a strong majority of candidates (96%) agreed that an immersion approach alone will not guarantee academic or linguistic success for second language learners.” In narrative explanations, one preservice teacher, representative of many others, remarked, “There has to be extra instructional efforts to help their students learn English … they won’t learn English by merely sitting in the classrooms” (Cho, Rios, Trent, & Mayfield, 2012, p. 75).

Methodologies on the art side of Ellingson’s (2011) continuum, alternatively, “value humanistic, openly subjective knowledge, such as that embodied in stories, poetry, photography, and painting” (p. 599). Analysis and interpretation in these (often more contemporary) methodological approaches do not strive for “social scientific truth,” but instead are formulated to “enable us to learn about ourselves, each other, and the world through encountering the unique lens of a person’s (or a group’s) passionate rendering of a reality into a moving, aesthetic expression of meaning” (p. 599). For these “artistic/interpretivists, truths are multiple, fluctuating and ambiguous” (p. 599). Methodologies taking more subjective approaches to analysis and interpretation include autoethnography, testimonio, performance studies, feminist theory/research, and other related critical methodological forms of qualitative practice. More specifically, arts-based approaches include poetic inquiry, fiction-based research, music as method, and dance and movement as inquiry (Leavy, 2017). Interpretation in these approaches is inherent. For example, “interpretive poetry is understood as a method of merging the participant’s words with the researcher’s perspective” (Leavy, 2017, p. 82).

As an example, one of us engaged in an artistic inquiry with a group of students in an art class for elementary teachers. We called it “Dreams as Data” and, among the project aims, we wanted to gather participants’ “dreams for education in the future” and display these dreams in an accessible, interactive, artistic display (see Trent, 2002). The intent was not to statistically analyze the dreams/data; instead, the aim was more universal. We wanted, as Ellingson (2011, p. 599) noted, to use participant responses in ways that “enable us to learn about ourselves, each other, and the world.” The decision was made to leave responses intact and to share the whole/raw data set in the artistic display in ways that allowed the viewers to holistically analyze and interpret for themselves. Additionally, the researcher (Trent, 2002) collaborated with his students to construct their own contextually situated interpretations of the data. The following text is an excerpt from one participant’s response:

Almost a century ago, John Dewey eloquently wrote about the need to imagine and create the education that ALL children deserve, not just the richest, the Whitest, or the easiest to teach. At the dawn of this new century, on some mornings, I wake up fearful that we are further away from this ideal than ever.… Collective action, in a critical, hopeful, joyful, anti-racist and pro-justice spirit, is foremost in my mind as I reflect on and act in my daily work.… Although I realize the constraints on teachers and schools in the current political arena, I do believe in the power of teachers to stand next to, encourage, and believe in the students they teach—in short, to change lives. (Trent, 2002, p. 49)

In sum, researchers whom Ellingson (2011) characterized as being on the science end of the continuum typically use more detailed or atomistic strategies to analyze and interpret qualitative data, whereas those toward the artistic end most often employ more holistic strategies. Both general approaches to qualitative data analysis and interpretation, atomistic and holistic, will be addressed later in this chapter.

As noted, qualitative researchers attend to data in a wide variety of ways depending on paradigmatic and epistemological beliefs, methodological conventions, and the purpose/aims of the research. These factors impact the kinds of data collected and the ways these data are ultimately analyzed and interpreted. For example, life history or testimonio researchers conduct extensive individual interviews, ethnographers record detailed observational notes, critical theorists may examine documents from pop culture, and ethnomethodologists may collect videotapes of interaction for analysis and interpretation.

In addition to the wide range of data types that are collected by qualitative researchers (and most qualitative researchers collect multiple forms of data), qualitative researchers, again influenced by the factors noted earlier, employ a variety of approaches to analyzing and interpreting data. As mentioned earlier in this chapter, some advocate for a detailed/atomistic, fine-grained approach to data (see, e.g., Bernard et al., 2017); others prefer a more broad-based, holistic, “eyeballing” of the data. According to Willis (2007), “Eyeballers reject the more structured approaches to analysis that break down the data into small units and, from the perspective of the eyeballers, destroy the wholeness and some of the meaningfulness of the data” (p. 298).

Regardless, we assert, as illustrated in Figure 31.1, that as the process evolves, data collection becomes less prominent, while interpretation and making sense/meaning of the data become more prominent. It is through this emphasis on interpretation that qualitative researchers put their individual imprints on the data, allowing for the emergence of multiple, rich perspectives. This space for interpretation allows researchers the freedom Buddy Rich alluded to in his quote about interpreting musical charts. Without this freedom, Rich noted, the process would simply be “mechanical.” Furthermore, allowing space for multiple interpretations nourishes the perspectives of many others in the community. Writer and theorist Meg Wheatley explained, “Everyone in a complex system has a slightly different interpretation. The more interpretations we gather, the easier it becomes to gain a sense of the whole.” In qualitative research, “there is no ‘getting it right’ because there could be many ‘rights’ ” (as cited in Lichtman, 2013).

Figure 31.1. Increasing Role of Interpretation in Data Analysis

In addition to the roles methodology and data play in the interpretive process, perhaps the most important is the role of the self/the researcher in the interpretive process. According to Lichtman (2013), “Data are collected, information is gathered, settings are viewed, and realities are constructed through his or her eyes and ears … the qualitative researcher interprets and makes sense of the data” (p. 21). Eisner (1991) supported the notion of the researcher “self as instrument,” noting that expert researchers know not simply what to attend to, but also what to neglect. He described the researcher’s role in the interpretive process as combining sensibility, the ability to observe and ascertain nuances, with schema, a deep understanding or cognitive framework of the phenomena under study.

J. Barrett (2007) described self/researcher roles as “transformations” (p. 418) at multiple points throughout the inquiry process: early in the process, researchers create representations through data generation, conducting observations and interviews and collecting documents and artifacts. Then,

transformation occurs when the “raw” data generated in the field are shaped into data records by the researcher. These data records are produced through organizing and reconstructing the researcher’s notes and transcribing audio and video recordings in the form of permanent records that serve as the “evidentiary warrants” of the generated data. The researcher strives to capture aspects of the phenomenal world with fidelity by selecting salient aspects to incorporate into the data record. (J. Barrett, 2007, p. 418)

Transformation continues when the researcher codes, categorizes, and explores patterns in the data (the process we call analysis).

Transformations also involve interpreting what the data mean and relating these interpretations to other sources of insight about the phenomena, including findings from related research, conceptual literature, and common experience.… Data analysis and interpretation are often intertwined and rely upon the researcher’s logic, artistry, imagination, clarity, and knowledge of the field under study. (J. Barrett, 2007, p. 418)

We mentioned the often-blended roles of participation and observation earlier in this chapter. The role(s) of the self/researcher are often described as points along a participant–observer continuum (see, e.g., Bogdan & Biklen, 2007). On the far observer end of this continuum, the researcher remains detached, tries to be inconspicuous (so as not to impact/disrupt the phenomena under study), and approaches the studied context as if viewing it from behind a one-way mirror. On the opposite, participant end, the researcher is completely immersed and involved in the context. It would be difficult for an outsider to distinguish between researcher and subjects. For example, “some feminist researchers and postmodernists take a political stance and have an agenda that places the researcher in an activist posture. These researchers often become quite involved with the individuals they study and try to improve their human condition” (Lichtman, 2013, p. 17).

We assert that most researchers fall somewhere between these poles. We believe that complete detachment is both impossible and misguided. In taking this position, we, along with many others, acknowledge (and honor) the role of subjectivity, the researcher’s beliefs, opinions, biases, and predispositions. Positivist researchers seeking objective data and accounts either ignore the impact of subjectivity or attempt to drastically diminish/eliminate its impact. Even qualitative researchers have developed methods to keep researcher subjectivity from affecting data collection, analysis, and interpretation. For example, foundational phenomenologist Husserl (1913/1962) developed the concept of bracketing, which Lichtman described as “trying to identify your views on the topic and then putting them aside” (2013, p. 22). Like Slotnick and Janesick (2011), we ultimately claim “it is impossible to bracket yourself” (p. 1358). Instead, we take a balanced approach, like Eisner, understanding that subjectivity allows researchers to produce the rich, idiosyncratic, insightful, and yet data-based interpretations and accounts of lived experience that accomplish the primary purposes of qualitative inquiry. Eisner (1991) wrote, “Rather than regarding uniformity and standardization as the summum bonum, educational criticism [Eisner’s form of qualitative research] views unique insight as the higher good” (p. 35). That said, we also claim that, just because we acknowledge and value the role of researcher subjectivity, researchers are still obligated to ground their findings in reasonable interpretations of the data. Eisner (1991) explained:

This appreciation for personal insight as a source of meaning does not provide a license for freedom. Educational critics must provide evidence and reasons. But they reject the assumption that unique interpretation is a conceptual liability in understanding, and they see the insights secured from multiple views as more attractive than the comforts provided by a single right one. (p. 35)

Connected to this participant–observer continuum is the way the researcher positions him- or herself in relation to the “subjects” of the study. Traditionally, researchers, including early qualitative researchers, anthropologists, and ethnographers, referenced those studied as subjects. More recently, qualitative researchers have come to better understand that research should be a reciprocal process in which both the researcher and the foci of the research derive meaningful benefit. Researchers aligned with this thinking frequently use the term participants to describe those groups and individuals included in a study. Going a step further, some researchers view research participants as experts on the studied topic and as equal collaborators in the meaning-making process. In these instances, researchers often use the terms co-researchers or co-investigators.

The qualitative researcher, then, plays significant roles throughout the inquiry process. These roles include transforming data, collaborating with research participants or co-researchers, determining appropriate points to situate along the participant–observer continuum, and ascribing personal insights, meanings, and interpretations that are both unique and justified with data exemplars. Performing these roles unavoidably impacts and changes the researcher. Slotnick and Janesick (2011) noted, “Since, in qualitative research the individual is the research instrument through which all data are passed, interpreted, and reported, the scholar’s role is constantly evolving as self evolves” (p. 1358).

As we note later, key in all this is for researchers to be transparent about the topics discussed in the preceding section: What methodological conventions have been employed and why? How have data been treated throughout the inquiry to arrive at assertions and findings that may or may not be transferable to other idiosyncratic contexts? And, finally, in what ways has the researcher/self been situated in and impacted the inquiry? Unavoidably, we assert, the self lies at the critical intersection of data and theory, and, as such, two legs of this stool, data and researcher, interact to create the third, theory.

How Do Qualitative Researchers Engage in the Process of Interpretation?

Theorists seem to have a propensity to dichotomize concepts, pulling them apart and placing binary opposites on the far ends of conceptual continua. Qualitative research theorists are no different, and we have already mentioned some of these continua in this chapter. For example, in the previous section, we discussed the participant–observer continuum. Earlier, we referenced both Willis’s (2007) conceptualization of atomistic versus holistic approaches to qualitative analysis and interpretation and Ellingson’s (2011) science–art continuum. Each of these latter two conceptualizations informs how qualitative researchers engage in the process of interpretation.

Willis (2007) shared that the purpose of a qualitative project might be explained as “what we expect to gain from research” (p. 288). The purpose, or what we expect to gain, then guides and informs the approaches researchers might take to interpretation. Some researchers, typically positivist/postpositivist, conduct studies that aim to test theories about how the world works and/or how people behave. These researchers attempt to discover general laws, truths, or relationships that can be generalized. Others, less confident in the ability of research to attain a single, generalizable law or truth, might seek “local theory.” These researchers still seek truths, but “instead of generalizable laws or rules, they search for truths about the local context … to understand what is really happening and then to communicate the essence of this to others” (Willis, 2007, p. 291). In both these purposes, researchers employ atomistic strategies in an inductive process in which researchers “break the data down into small units and then build broader and broader generalizations as the data analysis proceeds” (p. 317). The earlier mentioned processes of analytic induction, constant comparison, and grounded theory fit within this conceptualization of atomistic approaches to interpretation. For example, a line-by-line coding of a transcript might begin an atomistic approach to data analysis.

Alternatively, other researchers pursue distinctly different aims. Researchers with an objective description purpose focus on accurately describing the people and context under study. These researchers adhere to standards and practices designed to achieve objectivity, and their approach to interpretation tends toward the atomistic side of the atomistic/holistic distinction.

The purpose of hermeneutic approaches to research is to “understand the perspectives of humans. And because understanding is situational, hermeneutic research tends to look at the details of the context in which the study occurred. The result is generally rich data reports that include multiple perspectives” (Willis, 2007, p. 293).

Still other researchers see their purpose as the creation of stories or narratives that utilize “a social process that constructs meaning through interaction … it is an effort to represent in detail the perspectives of participants … whereas description produces one truth about the topic of study, storytelling may generate multiple perspectives, interpretations, and analyses by the researcher and participants” (Willis, 2007, p. 295).

In these latter purposes (hermeneutic, storytelling, narrative production), researchers typically employ more holistic strategies. According to Willis (2007), “Holistic approaches tend to leave the data intact and to emphasize that meaning must be derived from a contextual reading of the data rather than the extraction of data segments for detailed analysis” (p. 297). This was the case with the Dreams as Data project mentioned earlier.

We understand the propensity to dichotomize, situate concepts as binary opposites, and create neat continua between these polar descriptors. These sorts of reduction and deconstruction support our understandings and, hopefully, enable us to eventually reconstruct these ideas in meaningful ways. Still, in reality, we realize most of us will, and should, work in the middle of these conceptualizations in fluid ways that allow us to pursue strategies, processes, and theories most appropriate for the research task at hand. As noted, Ellingson (2011) set up another conceptual continuum, but, like ours, her advice was to “straddle multiple points across the field of qualitative methods” (p. 595). She explained, “I make the case for qualitative methods to be conceptualized as a continuum anchored by art and science, with vast middle spaces that embody infinite possibilities for blending artistic, expository, and social scientific ways of analysis and representation” (p. 595).

We explained at the beginning of this chapter that we view analysis as organizing and summarizing qualitative data and interpretation as constructing meaning. In this sense, analysis allows us to describe the phenomena under study. It enables us to succinctly answer what and how questions and ensures that our descriptions are grounded in the data collected. Descriptions, however, rarely respond to questions of why. Why questions are the domain of interpretation, and, as noted throughout this text, interpretation is complex. Gubrium and Holstein (2000) noted, “Traditionally, qualitative inquiry has concerned itself with what and how questions … qualitative researchers typically approach why questions cautiously, explanation is tricky business” (p. 502). Eisner (1991) described this distinctive nature of interpretation: “It means that inquirers try to account for [interpretation] what they have given account of” (p. 35).

Our focus here is on interpretation, but interpretation requires analysis, because without clear understandings of the data and their characteristics, derived through systematic examination and organization (e.g., coding, memoing, categorizing), “interpretations” resulting from inquiry will likely be incomplete, uninformed, and inconsistent with the constructed perspectives of the study participants. Fortunately for qualitative researchers, we have many sources that lead us through analytic processes. We earlier mentioned the accepted processes of analytic induction and the constant comparison method. These detailed processes (see, e.g., Bogdan & Biklen, 2007) combine the inextricably linked activities of analysis and interpretation, with analysis more typically appearing as earlier steps in the process and meaning construction—interpretation—happening later.

A wide variety of resources support researchers engaged in the processes of analysis and interpretation. Saldaña (2011), for example, provided a detailed description of coding types and processes. He showed researchers how to use process coding (uses gerunds, “-ing” words, to capture action), in vivo coding (uses the actual words of the research participants/subjects), descriptive coding (uses nouns to summarize the data topics), versus coding (uses “vs” to identify conflicts and power issues), and values coding (identifies participants’ values, attitudes, and/or beliefs). To exemplify some of these coding strategies, we include an excerpt from a transcript of a meeting of a school improvement committee. In this study, the collaborators were focused on building “school community.” This excerpt illustrates the application of a variety of codes described by Saldaña to this text:

To connect and elaborate on the ideas developed in coding, Saldaña (2011) suggested researchers categorize the applied codes, write memos to deepen understandings and illuminate additional questions, and identify emergent themes. To begin the categorization process, Saldaña recommended all codes be “classified into similar clusters … once the codes have been classified, a category label is applied to them” (p. 97). So, continuing with the school community study example coded here, the researcher might create a cluster/category called “Value of Collaboration” and in this category might include the codes “relationships,” “building community,” and “effective strategies.”
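
For researchers who track their codes digitally, this cluster-then-label move can be recorded with very simple data structures. The Python sketch below is ours, not Saldaña’s; it records only the “Value of Collaboration” category just described, and the commented-out second category is a hypothetical placeholder.

# Categorizing codes (after Saldaña): similar codes are classified
# into clusters, and a category label is applied to each cluster.
# The clustering judgment itself is the researcher's; this structure
# simply records and reports it.

codes_by_category = {
    "Value of Collaboration": [
        "relationships",
        "building community",
        "effective strategies",
    ],
    # Further categories would be added as analysis proceeds, e.g.:
    # "Meeting Logistics": ["scheduling", "agenda setting"],
}

for category, codes in codes_by_category.items():
    print(f"Category: {category}")
    for code in codes:
        print(f"  code: {code}")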

Having coded and categorized a study’s various data forms, researchers typically next write memos or analytic memos. Writing analytic memos allows the researcher(s) to

set in words your interpretation of the data … an analytic memo further articulates your … thinking processes on what things may mean … as the study proceeds, however, initial and substantive analytic memos can be revisited and revised for eventual integration into the report itself. (Saldaña, 2011, p. 98)

In the study of student teaching from K–12 students’ perspectives (Trent & Zorko, 2006), we noticed throughout our analysis a series of focus group interview quotes coded “names.” The following quote from a high school student is representative of many others:

I think that, ah, they [student teachers] should like know your face and your name because, uh, I don’t like it if they don’t and they’ll just like … cause they’ll blow you off a lot easier if they don’t know, like our new principal is here … he is, like, he always, like, tries to make sure to say hi even to the, like, not popular people if you can call it that, you know, and I mean, yah, and the people that don’t usually socialize a lot, I mean he makes an effort to know them and know their name like so they will cooperate better with him.

Although we did not ask the focus groups a specific question about whether student teachers knew the K–12 students’ names, the topic came up in every focus group interview. We coded the above excerpt and the others “knowing names,” and these data were grouped with others under the category “relationships.” In an initial analytic memo about this, the researchers wrote,

STUDENT TEACHING STUDY—MEMO #3 “Knowing Names as Relationship Building” Most groups made unsolicited mentions of student teachers knowing, or not knowing, their names. We haven’t asked students about this, but it must be important to them because it always seems to come up. Students expected student teachers to know their names. When they did, students noticed and seemed pleased. When they didn’t, students seemed disappointed, even annoyed. An elementary student told us that early in the semester, “she knew our names … cause when we rose [sic] our hands, she didn’t have to come and look at our name tags … it made me feel very happy.” A high schooler, expressing displeasure that his student teacher didn’t know students’ names, told us, “They should like know your name because it shows they care about you as a person. I mean, we know their names, so they should take the time to learn ours too.” Another high school student said that even after 3 months, she wasn’t sure the student teacher knew her name. Another student echoed, “Same here.” Each of these students asserted that this (knowing students’ names) had impacted their relationship with the student teacher. This high school student focus group stressed that a good relationship, built early, directly impacts classroom interaction and student learning. A student explained it like this: “If you get to know each other, you can have fun with them … they seem to understand you more, you’re more relaxed, and learning seems easier.”

As noted in these brief examples, coding, categorizing, and writing memos about a study’s data are all accepted processes for data analysis and allow researchers to begin constructing new understandings and forming interpretations of the studied phenomena. We find the qualitative research literature to be particularly strong in offering support and guidance for researchers engaged in these analytic practices. In addition to those already noted in this chapter, we have found the following resources provide practical, yet theoretically grounded approaches to qualitative data analysis. For more detailed, procedural, or atomistic approaches to data analysis, we direct researchers to Miles and Huberman’s classic 1994 text, Qualitative Data Analysis, and Bernard et al.’s 2017 book Analyzing Qualitative Data: Systematic Approaches. For analysis and interpretation strategies falling somewhere between the atomistic and holistic poles, we suggest Hesse-Biber and Leavy’s (2011) chapter, “Analysis and Interpretation of Qualitative Data,” in their book, The Practice of Qualitative Research (second edition); Lichtman’s chapter, “Making Meaning From Your Data,” in her 2013 book Qualitative Research in Education: A User’s Guide (third edition); and “Processing Fieldnotes: Coding and Memoing,” a chapter in Emerson, Fretz, and Shaw’s (1995) book, Writing Ethnographic Fieldnotes. Each of these sources succinctly describes the processes of data preparation, data reduction, coding and categorizing data, and writing memos about emergent ideas and findings. For more holistic approaches, we have found Denzin and Lincoln’s (2007) Collecting and Interpreting Qualitative Materials and Ellis and Bochner’s (2000) chapter “Autoethnography, Personal Narrative, Reflexivity” to both be very informative. Finally, Leavy’s 2017 book, Method Meets Art: Arts-Based Research Practice, provides support and guidance to researchers engaged in arts-based research.

Even after reviewing the multiple resources for treating data included here, qualitative researchers might still be wondering, But exactly how do we interpret? In the remainder of this section and in the concluding section of this chapter, we more concretely provide responses to this question and, in closing, we propose a framework for researchers to utilize as they engage in the complex, ambiguous, and yet exciting process of constructing meanings and new understandings from qualitative sources.

These meanings and understandings are often presented as theory, but theories in this sense should be viewed more as “guides to perception” as opposed to “devices that lead to the tight control or precise prediction of events” (Eisner, 1991, p. 95). Perhaps Erickson’s (1986) concept of assertions is a more appropriate aim for qualitative researchers. He claimed that assertions are declarative statements; they include a summary of the new understandings, and they are supported by evidence/data. These assertions are open to revision and are revised when disconfirming evidence requires modification. Assertions, theories, or other explanations resulting from interpretation in research are typically presented as “findings” in written research reports. Belgrave and Smith (2002) emphasized the importance of these interpretations (as opposed to descriptions): “The core of the report is not the events reported by the respondent, but rather the subjective meaning of the reported events for the respondent” (p. 248).

Mills (2018) viewed interpretation as responding to the question, So what? He provided researchers a series of concrete strategies for both analysis and interpretation. Specific to interpretation, Mills (pp. 204–207) suggested a variety of techniques, including the following:

“Extend the analysis”: In doing so, researchers ask additional questions about the research. The data appear to say X, but could it be otherwise? In what ways do the data support emergent finding X? And, in what ways do they not?

“Connect findings with personal experience”: Using this technique, researchers share interpretations based on their intimate knowledge of the context, the observed actions of the individuals in the studied context, and the data points that support emerging interpretations, as well as their awareness of discrepant events or outlier data. In a sense, the researcher is saying, “Based on my experiences in conducting this study, this is what I make of it all.”

“Seek the advice of ‘critical’ friends”: In doing so, researchers utilize trusted colleagues, fellow researchers, experts in the field of study, and others to offer insights, alternative interpretations, and the application of their own unique lenses to a researcher’s initial findings. We especially like this strategy because we acknowledge that, too often, qualitative interpretation is a “solo” affair.

“Contextualize findings in the literature”: This allows researchers to compare their interpretations to those of others writing about and studying the same/similar phenomena. The results of this contextualization may be that the current study’s findings correspond with the findings of other researchers. The results might, alternatively, differ from the findings of other researchers. In either instance, the researcher can highlight his or her unique contributions to our understanding of the topic under study.

“Turn to theory”: Mills defined theory as “an analytical and interpretive framework that helps the researcher make sense of ‘what is going on’ in the social setting being studied.” In turning to theory, researchers search for increasing levels of abstraction and move beyond purely descriptive accounts. Connecting to extant or generating new theory enables researchers to link their work to the broader contemporary issues in the field.

Other theorists offer additional advice for researchers engaged in the act of interpretation. Richardson (1995) reminded us to account for the power dynamics in the researcher–researched relationship and noted that, in doing so, we can allow for oppressed and marginalized voices to be heard in context. Bogdan and Biklen (2007) suggested that researchers engaged in interpretation revisit foundational writing about qualitative research, read studies related to the current research, ask evaluative questions (e.g., Is what I’m seeing here good or bad?), ask about implications of particular findings/interpretations, think about the audience for interpretations, look for stories and incidents that illustrate a specific finding/interpretation, and attempt to summarize key interpretations in a succinct paragraph. All these suggestions can be pertinent in certain situations and with particular methodological approaches. In the next and closing section of this chapter, we present a framework for interpretive strategies we believe will support, guide, and be applicable to qualitative researchers across multiple methodologies and paradigms.

In What Ways Can a Framework for Interpretation Strategies Support Qualitative Researchers across Multiple Methodological and Paradigmatic Views?

The process of qualitative research is often compared to a journey, one without a detailed itinerary or predetermined ending, but with general direction and aims, and with an open-endedness that adds excitement and thrives on curiosity. Qualitative researchers are travelers. They travel physically to field sites; they travel mentally through various epistemological, theoretical, and methodological grounds; they travel through a series of problem-finding, access, data collection, and data analysis processes; and, finally—the topic of this chapter—they travel through the process of making meaning of all this physical and cognitive travel via interpretation.

Although travel is an appropriate metaphor to describe the journey of qualitative researchers, we will also use “travel” to symbolize a framework for qualitative research interpretation strategies. By design, this framework applies across multiple paradigmatic, epistemological, and methodological traditions. The application of this framework is not formulaic or highly prescriptive; it is also not an anything-goes approach. It falls, and is applicable, between these poles, giving concrete (suggested) direction to qualitative researchers wanting to make the most of the interpretations that result from their research and yet allowing the necessary flexibility for researchers to employ the methods, theories, and approaches they deem most appropriate to the research problem(s) under study.

TRAVEL, a Comprehensive Approach to Qualitative Interpretation

In using the word TRAVEL as a mnemonic device, our aim is to highlight six essential concepts we argue all qualitative researchers should attend to in the interpretive process: transparency, reflexivity, analysis, validity, evidence, and literature. The importance of each is addressed here.

Transparency, as a research concept, seems, well, transparent. But, too often, we read qualitative research reports and are left with many questions: How were research participants and the topic of study selected/excluded? How were the data collected, when, and for how long? Who analyzed and interpreted these data? A single researcher? Multiple researchers? What interpretive strategies were employed? Are there data points that substantiate these interpretations/findings? What analytic procedures were used to organize the data prior to making the presented interpretations? In being transparent about data collection, analysis, and interpretation processes, researchers allow reviewers/readers insight into the research endeavor, and this transparency lends credibility to both the researcher and the researcher’s claims. Altheide and Johnson (2011) explained,

There is great diversity of qualitative research.… While these approaches differ, they also share an ethical obligation to make public their claims, to show the reader, audience, or consumer why they should be trusted as faithful accounts of some phenomenon. (p. 584)

This includes, they noted, articulating

what the different sources of data were, how they were interwoven, and … how subsequent interpretations and conclusions are more or less closely tied to the various data … the main concern is that the connection be apparent, and to the extent possible, transparent. (p. 590)

In the Dreams as Data art and research project noted earlier, transparency was addressed in multiple ways. Readers of the project write-up were informed that interpretations resulting from the study, framed as themes, were the result of collaborative analysis that included insights from both students and instructor. Viewers of the art installation/data display had the rare opportunity to see all participant responses. In other words, viewers had access to the entire raw data set (see Trent, 2002). More frequently, we encounter only research “findings” already distilled, analyzed, and interpreted in research accounts, often by a single researcher. Allowing research consumers access to the data to interpret for themselves in the Dreams project was an intentional attempt at transparency.

Reflexivity, the second of our concepts for interpretive researcher consideration, has garnered a great deal of attention in qualitative research literature. Some have called this increased attention the reflexive turn (see, e.g., Denzin & Lincoln, 2004).

Although you can find many meanings for the term reflexivity, it is usually associated with a critical reflection on the practice and process of research and the role of the researcher. It concerns itself with the impact of the researcher on the system and the system on the researcher. It acknowledges the mutual relationships between the researcher and who and what is studied … by acknowledging the role of the self in qualitative research, the researcher is able to sort through biases and think about how they affect various aspects of the research, especially interpretation of meanings. (Lichtman, 2013, p. 165)

As with transparency, attending to reflexivity allows researchers to attach credibility to presented findings. Providing a reflexive account of researcher subjectivity and the interactions of this subjectivity within the research process is a way for researchers to communicate openly with their audience. Instead of trying to purge inherent bias from the process, qualitative researchers share with readers the value of having a specific, idiosyncratic positionality. As a result, situated, contextualized interpretations are viewed as an asset, as opposed to a liability.

LaBanca (2011), acknowledging the often solitary nature of qualitative research, called for researchers to engage others in the reflexive process. Like many other researchers, LaBanca utilized a researcher journal to chronicle reflexive thoughts, explorations, and understandings, but he took it a step further. Realizing the value of others’ input, LaBanca posted his reflexive journal entries on a blog (what he calls an online reflexivity blog) and invited critical friends, other researchers, and interested members of the community to audit his reflexive moves, providing insights, questions, and critique that informed his research and study interpretations.

We agree this is a novel approach worth considering. We, too, understand that multiple interpreters will undoubtedly produce multiple interpretations, a richness of qualitative research. So, we suggest researchers consider bringing others in before the production of the report. This could be fruitful in multiple stages of the inquiry process, but especially in the complex, idiosyncratic processes of reflexivity and interpretation. We are both educators and educational researchers. Historically, each of these roles has tended to be constructed as an isolated endeavor, the solitary teacher, the solo researcher/fieldworker. As noted earlier and in the analysis section that follows, introducing collaborative processes to what has often been a solitary activity offers much promise for generating rich interpretations that benefit from multiple perspectives.

Being consciously reflexive throughout our practice as researchers has benefitted us in many ways. In a study of teacher education curricula designed to prepare preservice teachers to support second-language learners, we realized hard truths that caused us to reflect on and adapt our own practices as teacher educators. Reflexivity can inform a researcher at all parts of the inquiry, even in early stages. For example, one of us was beginning a study of instructional practices in an elementary school. The communicated methods of the study indicated that the researcher would be largely an observer. Early fieldwork revealed that the researcher became much more involved as a participant than anticipated. Deep reflection and writing about the classroom interactions allowed the researcher to realize that the initial purpose of the research was not being accomplished, and the researcher believed he was having a negative impact on the classroom culture. Reflexivity in this instance prompted the researcher to leave the field and abandon the project as it was just beginning. Researchers should plan to openly engage in reflexive activities, including writing about their ongoing reflections and subjectivities. Including excerpts of this writing in research accounts supports our earlier recommendation of transparency.

Early in this chapter, for the purposes of discussion and examination, we defined analysis as “summarizing and organizing” data in a qualitative study and interpretation as “meaning making.” Although our focus has been on interpretation as the primary topic, the importance of good analysis cannot be overestimated, because without it, resultant interpretations are likely incomplete and potentially uninformed. Comprehensive analysis puts researchers in a position to be deeply familiar with collected data and to organize these data into forms that lead to rich, unique interpretations that are nonetheless clearly connected to data exemplars. Although we find it advantageous to examine analysis and interpretation as different but related practices, in reality, the lines blur as qualitative researchers engage in these recursive processes.

We earlier noted our affinity for a variety of approaches to analysis (see, e.g., Hesse-Biber & Leavy, 2011; Lichtman, 2013; or Saldaña, 2011). Emerson et al. (1995) presented a grounded approach to qualitative data analysis: In early stages, researchers engage in a close, line-by-line reading of data/collected text and accompany this reading with open coding, a process of categorizing and labeling the inquiry data. Next, researchers write initial memos to describe and organize the data under analysis. These analytic phases allow the researcher(s) to prepare, organize, summarize, and understand the data, in preparation for the more interpretive processes of focused coding and the writing up of interpretations and themes in the form of integrative memos.

Similarly, Mills (2018) provided guidance on the process of analysis for qualitative action researchers. His suggestions for organizing and summarizing data include coding (labeling data and looking for patterns); identifying themes by considering the big picture while looking for recurrent phrases, descriptions, or topics; asking key questions about the study data (who, what, where, when, why, and how); developing concept maps (graphic organizers that show initial organization and relationships in the data); and stating what’s missing by articulating what data are not present (pp. 179–189).
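
Mills’s suggestion to look for recurrent phrases is another point where light automation can support, though never replace, the researcher’s reading. The short Python sketch below surfaces repeated two-word phrases across a set of responses; the routine is ours, not Mills’s, and the sample sentences are invented for illustration (they loosely echo the “knowing names” example earlier in this chapter).

# Surface recurrent phrases: count repeated two-word sequences across
# a data set so that frequently used expressions come to the
# researcher's attention for closer, contextual reading.

from collections import Counter

def recurrent_phrases(texts, min_count=2):
    counts = Counter()
    for text in texts:
        words = [w.strip(".,!?").lower() for w in text.split()]
        counts.update(zip(words, words[1:]))  # adjacent word pairs
    return [(" ".join(pair), n)
            for pair, n in counts.most_common() if n >= min_count]

interviews = [
    "He always makes an effort to know our names.",
    "She made an effort to learn names early in the semester.",
]
print(recurrent_phrases(interviews))  # [('an effort', 2), ('effort to', 2)]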

Many theorists, like Emerson et al. (1995) and Mills (2018) noted here, provide guidance for individual researchers engaged in individual data collection, analysis, and interpretation; others, however, invite us to consider the benefits of collaboratively engaging in these processes through the use of collaborative research and analysis teams. Paulus, Woodside, and Ziegler (2008) wrote about their experiences in collaborative qualitative research: “Collaborative research often refers to collaboration among the researcher and the participants. Few studies investigate the collaborative process among researchers themselves” (p. 226).

Paulus et al. (2008) claimed that the collaborative process “challenged and transformed our assumptions about qualitative research” (p. 226). Engaging in reflexivity, analysis, and interpretation as a collaborative enabled these researchers to reframe their views about the research process, finding that the process was much more recursive, as opposed to following a linear progression. They also found that cooperatively analyzing and interpreting data yielded “collaboratively constructed meanings” as opposed to “individual discoveries.” And finally, instead of the traditional “individual products” resulting from solo research, collaborative interpretation allowed researchers to participate in an “ongoing conversation” (p. 226).

These researchers explained that engaging in collaborative analysis and interpretation of qualitative data challenged their previously held assumptions. They noted,

through collaboration, procedures are likely to be transparent to the group and can, therefore, be made public. Data analysis benefits from an iterative, dialogic, and collaborative process because thinking is made explicit in a way that is difficult to replicate as a single researcher. (Paulus et al., 2008, p. 236)

They shared that, during the collaborative process, “we constantly checked our interpretation against the text, the context, prior interpretations, and each other’s interpretations” (p. 234).

We, too, have engaged in analyses like those described here, including working on research teams. We encourage other researchers to find processes that fit the methodology and data of a particular study, use the techniques and strategies most appropriate, and then cite the utilized authority to justify the selected path. We urge traditionally solo researchers to consider trying a collaborative approach. Generally, we suggest researchers be familiar with a wide repertoire of practices. In doing so, they will be in better positions to select and use strategies most appropriate for their studies and data. Carefully preparing, organizing, categorizing, and summarizing data sets the researcher(s) up to construct meaningful interpretations in the forms of assertions, findings, themes, and theories.

Researchers want their findings to be sound, backed by evidence, and justifiable and to accurately represent the phenomena under study. In short, researchers seek validity for their work. We assert that qualitative researchers should attend to validity concepts as a part of their interpretive practices. We have previously written and theorized about validity, and, in doing so, we have highlighted and labeled what we consider two distinctly different approaches, transactional and transformational (Cho & Trent, 2006). We define transactional validity in qualitative research as an interactive process occurring among the researcher, the researched, and the collected data, one that is aimed at achieving a relatively higher level of accuracy. Techniques, methods, and/or strategies are employed during the conduct of the inquiry. These techniques, such as member checking and triangulation, are seen as a medium with which to ensure an accurate reflection of reality (or, at least, participants’ constructions of reality). Lincoln and Guba’s (1985) widely known notion of trustworthiness in “naturalistic inquiry” is grounded in this approach. In seeking trustworthiness, researchers attend to research credibility, transferability, dependability, and confirmability. Validity approaches described by Maxwell (1992) as “descriptive” and “interpretive” also rely on transactional processes.

For example, in the write-up of a study on the facilitation of teacher research, one of us (Trent, 2012) wrote about the use of transactional processes:

“Member checking is asking the members of the population being studied for their reaction to the findings” (Sagor, 2000, p. 136). Interpretations and findings of this research, in draft form, were shared with teachers (for member checking) on multiple occasions throughout the study. Additionally, teachers reviewed and provided feedback on the final draft of this article. (p. 44)

This member checking led to changes in some resultant interpretations (called findings in this particular study) and to adaptations of others that shaped these findings in ways that made them both richer and more contextualized.

Alternatively, in transformational approaches, validity is not so much something that can be achieved solely by employing certain techniques. Transformationalists assert that because traditional or positivist inquiry is no longer seen as an absolute means to truth in the realm of human science, alternative notions of validity should be considered to achieve social justice, deeper understandings, broader visions, and other legitimate aims of qualitative research. In this sense, it is the ameliorative aspects of the research that achieve (or do not achieve) its validity. Validity is determined by the resultant actions prompted by the research endeavor.

Lather (1993), Richardson (1997), and others (e.g., Lenzo, 1995; Scheurich, 1996) proposed a transgressive approach to validity that emphasized a higher degree of self-reflexivity. For example, Lather proposed a “catalytic validity” described as “the degree to which the research empowers and emancipates the research subjects” (Scheurich, 1996, p. 4). Beverley (2000, p. 556) proposed testimonio as a qualitative research strategy. These first-person narratives find their validity in their ability to raise consciousness and thus provoke political action to remedy problems of oppressed peoples (e.g., poverty, marginality, exploitation).

We, too, have pursued research with transformational aims. In the earlier mentioned study of preservice teachers’ experiences learning to teach second-language learners (Cho et al., 2012), our aims were to empower faculty members, evolve the curriculum, and, ultimately, better serve preservice teachers so that they might better serve English-language learners in their classrooms. As program curricula and activities have changed as a result, we claim a degree of transformational validity for this research.

It is important, then, for qualitative researchers throughout the inquiry, but especially when engaged in the process of interpretation, to determine the type(s) of validity applicable to the study. What are the aims of the study? Providing an “accurate” account of studied phenomena? Empowering participants to take action for themselves and others? The determination of this purpose will, in turn, inform researchers’ analysis and interpretation of data. Understanding and attending to the appropriate validity criteria will bolster researcher claims to meaningful findings and assertions.

Regardless of purpose or chosen validity considerations, qualitative research depends on evidence. Researchers in different qualitative methodologies rely on different types of evidence to support their claims. Qualitative researchers typically utilize a variety of forms of evidence including texts (written notes, transcripts, images, etc.), audio and video recordings, cultural artifacts, documents related to the inquiry, journal entries, and field notes taken during observations of social contexts and interactions. Schwandt (2001) wrote,

Evidence is essential to justification, and justification takes the form of an argument about the merit(s) of a given claim. It is generally accepted that no evidence is conclusive or unassailable (and hence, no argument is foolproof). Thus, evidence must often be judged for its credibility, and that typically means examining its source and the procedures by which it was produced [thus the need for transparency discussed earlier]. (p. 82)

Altheide and Johnson (2011) drew a distinction between evidence and facts:

Qualitative researchers distinguish evidence from facts. Evidence and facts are similar but not identical. We can often agree on facts, e.g., there is a rock, it is harder than cotton candy. Evidence involves an assertion that some facts are relevant to an argument or claim about a relationship. Since a position in an argument is likely tied to an ideological or even epistemological position, evidence is not completely bound by facts, but it is more problematic and subject to disagreement. (p. 586)

Inquirers should make every attempt to link evidence to claims (or findings, interpretations, assertions, conclusions, etc.). There are many strategies for making these connections. Induction involves accumulating multiple data points to infer a general conclusion. Confirmation entails directly linking evidence to resultant interpretations. Testability/falsifiability means illustrating that evidence does not necessarily contradict the claim/interpretation and so increases the credibility of the claim (Schwandt, 2001). In the study about learning to teach second-language learners, for example, a study finding (Cho et al., 2012) was that “as a moral claim, candidates increasingly [in higher levels of the teacher education program] feel more responsible and committed to … [English language learners]” (p. 77). We supported this finding with a series of data points that included the following preservice teacher response: “It is as much the responsibility of the teacher to help teach second-language learners the English language as it is our responsibility to teach traditional English speakers to read or correctly perform math functions.” Claims supported by evidence allow readers to see for themselves, to examine researcher assertions in tandem with the evidence, and to form further interpretations of their own.

Some postmodernists reject the notion that qualitative interpretations are arguments based on evidence. Instead, they argue that qualitative accounts are not intended to faithfully represent experience, but instead are designed to evoke feelings or reactions in the reader of the account (Schwandt, 2001). We argue that, even in these instances where transformational validity concerns take priority over transactional processes, evidence still matters. Did the assertions accomplish the evocative aims? What evidence/arguments were used to evoke these reactions? Does the presented claim correspond with the study’s evidence? Is the account inclusive? In other words, does it attend to all evidence or selectively compartmentalize some data while capitalizing on other evidentiary forms?

Researchers, we argue, should be both transparent and reflexive about these questions and, regardless of research methodology or purpose, should share with readers of the account their evidentiary moves and aims. Altheide and Johnson (2011) called this an evidentiary narrative and explained:

Ultimately, evidence is bound up with our identity in a situation.… An “evidentiary narrative” emerges from a reconsideration of how knowledge and belief systems in everyday life are tied to epistemic communities that provide perspectives, scenarios, and scripts that reflect symbolic and social moral orders. An “evidentiary narrative” symbolically joins an actor, an audience, a point of view (definition of a situation), assumptions, and a claim about a relationship between two or more phenomena. If any of these factors are not part of the context of meaning for a claim, it will not be honored, and thus, not seen as evidence. (p. 686)

In sum, readers/consumers of a research account deserve to know how evidence was treated and viewed in an inquiry. They want, and should be made, aware of accounts that aim to evoke versus represent, so that they can apply their own criteria (including the potential transferability to their situated context). Renowned ethnographer and qualitative research theorist Harry Wolcott (1990) urged researchers to “let readers ‘see’ for themselves” by providing more detail rather than less and by sharing primary data/evidence to support interpretations. In the end, readers do not expect perfection. Writer Eric Liu (2010) explained, “We don’t expect flawless interpretation. We expect good faith. We demand honesty.”

Last, in this journey through concepts we assert are pertinent to researchers engaged in interpretive processes, we include attention to the literature. In discussing literature, qualitative researchers typically mean publications about the prior research conducted on topics aligned with or related to a study. Most often, this research/literature is reviewed and compiled by researchers in a section of the research report titled “Literature Review.” It is here we find others’ studies, methods, and theories related to our topics of study, and it is here we hope the assertions and theories that result from our studies will someday reside.

We acknowledge the value of being familiar with research related to topics of study. This familiarity can inform multiple phases of the inquiry process. Understanding the extant knowledge base can inform research questions and topic selection, data collection and analysis plans, and the interpretive process. In what ways do the interpretations from this study correspond with other research conducted on this topic? Do findings/interpretations corroborate, expand, or contradict other researchers’ interpretations of similar phenomena? In any of these scenarios (correspondence, expansion, contradiction), new findings and interpretations from a study add to and deepen the knowledge base, or literature, on a topic of investigation.

For example, in our literature review for the study of student teaching, we quickly determined that the knowledge base and extant theories related to the student teaching experience were immense, but we also quickly realized that few, if any, studies had examined student teaching from the perspective of the K–12 students taught by the student teachers. This focus on the literature related to our topic of student teaching prompted us to embark on a study that would fill a gap in this literature: most of the knowledge base focused on the experiences and learning of the student teachers themselves. Our study, then, by focusing on the K–12 students’ perspectives, added literature/theories/assertions to a previously untapped area. The “literature” in this area (at least we would like to think) is now more robust as a result.

In another example, a research team (Trent et al., 2003) focused on institutional diversity efforts, mined the literature, found an appropriate existing (a priori) set of theories/assertions, and then used this existing theoretical framework from the literature to analyze data, in this case, a variety of institutional activities related to diversity.

Conducting a literature review to explore extant theories on a topic of study can serve a variety of purposes. As evidenced in these examples, consulting the literature/extant theory can reveal gaps in the literature. A literature review might also lead researchers to existing theoretical frameworks that support analysis and interpretation of their data (as in the use of the a priori framework example). Finally, a review of current theories related to a topic of inquiry might confirm that much theory already exists, but that further study may add to, bolster, and/or elaborate on the current knowledge base.

Guidance for researchers conducting literature reviews is plentiful. Lichtman (2013) suggested researchers conduct a brief literature review, begin research, and then update and modify the literature review as the inquiry unfolds. She suggested reviewing a wide range of related materials (not just scholarly journals) and additionally suggested that researchers attend to literature on methodology, not just the topic of study. She also encouraged researchers to bracket and write down thoughts on the research topic as they review the literature, and, important for this chapter, that researchers “integrate your literature review throughout your writing rather than using a traditional approach of placing it in a separate chapter” (p. 173).

We agree that the power of a literature review to provide context for a study can be maximized when this information is not compartmentalized apart from a study’s findings. Integrating (or at least revisiting) reviewed literature juxtaposed alongside findings can illustrate how new interpretations add to an evolving story. Eisenhart (1998) expanded the traditional conception of the literature review and discussed the concept of an interpretive review. By taking this interpretive approach, Eisenhart claimed that reviews, alongside related interpretations/findings on a specific topic, have the potential to allow readers to see the studied phenomena in entirely new ways, through new lenses, revealing heretofore unconsidered perspectives. Reviews that offer surprising and enriching perspectives on meanings and circumstances “shake things up, break down boundaries, and cause things (or thinking) to expand” (p. 394). Coupling reviews of this sort with current interpretations will “give us stories that startle us with what we have failed to notice” (p. 395).

In reviews of research studies, it can certainly be important to evaluate the findings in light of established theories and methods [the sorts of things typically included in literature reviews]. However, it also seems important to ask how well the studies disrupt conventional assumptions and help us to reconfigure new, more inclusive, and more promising perspectives on human views and actions. From an interpretivist perspective, it would be most important to review how well methods and findings permit readers to grasp the sense of unfamiliar perspectives and actions. (Eisenhart, 1998, p. 397)

Though our interpretation-related journey in this chapter nears an end, we are hopeful it is just the beginning of multiple new conversations among ourselves and in concert with other qualitative researchers. Our aims have been to circumscribe interpretation in qualitative research; emphasize the importance of interpretation in achieving the aims of the qualitative project; discuss the interactions of methodology, data, and the researcher/self as these concepts and theories intertwine with interpretive processes; describe some concrete ways that qualitative inquirers engage the process of interpretation; and, finally, provide a framework of interpretive strategies that may serve as a guide for ourselves and other researchers.

In closing, we note that the TRAVEL framework, construed as a journey to be undertaken by researchers engaged in interpretive processes, is not designed to be rigid or prescriptive, but instead is designed to be a flexible set of concepts that will inform researchers across multiple epistemological, methodological, and theoretical paradigms. We chose the concepts of transparency, reflexivity, analysis, validity, evidence, and literature (TRAVEL) because they are applicable to the infinite journeys undertaken by qualitative researchers who have come before and to those who will come after us. As we journeyed through our interpretations of interpretation, we have discovered new things about ourselves and our work. We hope readers also garner insights that enrich their interpretive excursions. Happy travels!

References

Altheide, D., & Johnson, J. M. (2011). Reflections on interpretive adequacy in qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (pp. 595–610). Thousand Oaks, CA: Sage.

Barrett, J. (2007). The researcher as instrument: Learning to conduct qualitative research through analyzing and interpreting a choral rehearsal. Music Education Research, 9, 417–433.

Barrett, T. (2011). Criticizing art: Understanding the contemporary (3rd ed.). New York, NY: McGraw-Hill.

Belgrave, L. L., & Smith, K. J. (2002). Negotiated validity in collaborative ethnography. In N. K. Denzin & Y. S. Lincoln (Eds.), The qualitative inquiry reader (pp. 233–255). Thousand Oaks, CA: Sage.

Bernard, H. R., Wutich, A., & Ryan, G. W. (2017). Analyzing qualitative data: Systematic approaches (2nd ed.). Thousand Oaks, CA: Sage.

Beverley, J. (2000). Testimonio, subalternity, and narrative authority. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 555–566). Thousand Oaks, CA: Sage.

Bogdan, R. C., & Biklen, S. K. (2007). Qualitative research for education: An introduction to theories and methods (5th ed.). Boston, MA: Allyn & Bacon.

Cho, J., Rios, F., Trent, A., & Mayfield, K. (2012). Integrating language diversity into teacher education curricula in a rural context: Candidates’ developmental perspectives and understandings. Teacher Education Quarterly, 39(2), 63–85.

Cho, J., & Trent, A. (2006). Validity in qualitative research revisited. QR—Qualitative Research Journal, 6, 319–340.

Denzin, N. K., & Lincoln, Y. S. (Eds.). (2004). Handbook of qualitative research. Newbury Park, CA: Sage.

Denzin, N. K., & Lincoln, Y. S. (2007). Collecting and interpreting qualitative materials. Thousand Oaks, CA: Sage.

Eisenhart, M. (1998). On the subject of interpretive reviews. Review of Educational Research, 68, 391–399.

Eisner, E. (1991). The enlightened eye: Qualitative inquiry and the enhancement of educational practice. New York, NY: Macmillan.

Ellingson, L. L. (2011). Analysis and representation across the continuum. In N. K. Denzin & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (pp. 595–610). Thousand Oaks, CA: Sage.

Ellis, C., & Bochner, A. P. (2000). Autoethnography, personal narrative, reflexivity: Researcher as subject. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 733–768). Thousand Oaks, CA: Sage.

Emerson, R., Fretz, R., & Shaw, L. (1995). Writing ethnographic fieldnotes. Chicago, IL: University of Chicago Press.

Erickson, F. (1986). Qualitative methods in research on teaching. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 119–161). New York, NY: Macmillan.

Glaser, B. (1965). The constant comparative method of qualitative analysis. Social Problems, 12, 436–445.

Gubrium, J. F., & Holstein, J. A. (2000). Analyzing interpretive practice. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 487–508). Thousand Oaks, CA: Sage.

Hammersley, M. (2013). What is qualitative research? London, England: Bloomsbury Academic.

Hesse-Biber, S. N. (2017). The practice of qualitative research (3rd ed.). Thousand Oaks, CA: Sage.

Hesse-Biber, S. N., & Leavy, P. (2011). The practice of qualitative research (2nd ed.). Thousand Oaks, CA: Sage.

Hubbard, R. S., & Power, B. M. (2003). The art of classroom inquiry: A handbook for teacher researchers. Portsmouth, NH: Heinemann.

Husserl, E. (1913/1962). Ideas: General introduction to pure phenomenology (W. R. Boyce Gibson, Trans.). London, England: Collier.

LaBanca, F. (2011). Online dynamic asynchronous audit strategy for reflexivity in the qualitative paradigm. Qualitative Report, 16, 1160–1171.

Lather, P. (1993). Fertile obsession: Validity after poststructuralism. Sociological Quarterly, 34, 673–693.

Leavy, P. (2017). Method meets art: Arts-based research practice (2nd ed.). New York, NY: Guilford Press.

Lenzo, K. (1995). Validity and self-reflexivity meet poststructuralism: Scientific ethos and the transgressive self. Educational Researcher, 24(4), 17–23, 45.

Lichtman, M. (2013). Qualitative research in education: A user’s guide (3rd ed.). Thousand Oaks, CA: Sage.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage.

Liu, E. (2010). The real meaning of balls and strikes. Retrieved from http://www.huffingtonpost.com/eric-liu/the-real-meaning-of-balls_b_660915.html

Maxwell, J. (1992). Understanding and validity in qualitative research. Harvard Educational Review, 62, 279–300.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis. Thousand Oaks, CA: Sage.

Mills, G. E. (2018). Action research: A guide for the teacher researcher (6th ed.). New York, NY: Pearson.

Olivier de Sardan, J. P. (2015). Epistemology, fieldwork, and anthropology. New York, NY: Palgrave Macmillan.

Packer, M. J. (2018). The science of qualitative research (2nd ed.). Cambridge, England: Cambridge University Press.

Paulus, T., Woodside, M., & Ziegler, M. (2008). Extending the conversation: Qualitative research as dialogic collaborative process. Qualitative Report, 13, 226–243.

Richardson, L. (1995). Writing stories: Co-authoring the “sea monster,” a writing story. Qualitative Inquiry, 1, 189–203.

Richardson, L. (1997). Fields of play: Constructing an academic life. New Brunswick, NJ: Rutgers University Press.

Sagor, R. (2000). Guiding school improvement with action research. Alexandria, VA: ASCD.

Saldaña, J. (2011). Fundamentals of qualitative research. New York, NY: Oxford University Press.

Scheurich, J. (1996). The masks of validity: A deconstructive investigation. Qualitative Studies in Education, 9, 49–60.

Schwandt, T. A. (2001). Dictionary of qualitative inquiry. Thousand Oaks, CA: Sage.

Slotnick, R. C., & Janesick, V. J. (2011). Conversations on method: Deconstructing policy through the researcher reflective journal. Qualitative Report, 16, 1352–1360.

Trent, A. (2002). Dreams as data: Art installation as heady research. Teacher Education Quarterly, 29(4), 39–51.

Trent, A. (2012). Action research on action research: A facilitator’s account. Action Learning and Action Research Journal, 18, 35–67.

Trent, A., Rios, F., Antell, J., Berube, W., Bialostok, S., Cardona, D., … Rush, T. (2003). Problems and possibilities in the pursuit of diversity: An institutional analysis. Equity & Excellence, 36, 213–224.

Trent, A., & Zorko, L. (2006). Listening to students: “New” perspectives on student teaching. Teacher Education & Practice, 19, 55–70.

Willig, C. (2017). Interpretation in qualitative research. In C. Willig & W. Stainton-Rogers (Eds.), The Sage handbook of qualitative research in psychology (2nd ed., pp. 267–290). London, England: Sage.

Willis, J. W. (2007). Foundations of qualitative research: Interpretive and critical approaches. Thousand Oaks, CA: Sage.

Wolcott, H. (1990). On seeking-and rejecting-validity in qualitative research. In E. Eisner & A. Peshkin (Eds.), Qualitative inquiry in education: The continuing debate (pp. 121–152). New York, NY: Teachers College Press.


Data Collection, Presentation and Analysis

Uche M. Mbanaso (Centre for Cybersecurity Studies, Nasarawa State University, Keffi, Nigeria), Lucienne Abrahams (LINK Centre, University of the Witwatersrand, Johannesburg, South Africa) & Kennedy Chinedu Okafor (Department of Mechatronics Engineering, Federal University of Technology, Owerri, Nigeria)

In: Research Techniques for Computer Science, Information Systems and Cybersecurity. Springer, Cham. First online: 25 May 2023. https://doi.org/10.1007/978-3-031-30031-8_7

This chapter covers the topics of data collection, data presentation and data analysis. It gives attention to data collection for studies based on experiments, on data derived from existing published or unpublished data sets, on observation, on simulation and digital twins, on surveys, on interviews and on focus group discussions. One of the interesting features of this chapter is the section dealing with using measurement scales in quantitative research, including nominal scales, ordinal scales, interval scales and ratio scales. It explains key facets of qualitative research including ethical clearance requirements. The chapter discusses the importance of data visualization as key to effective presentation of data, including tabular forms, graphical forms and visual charts such as those generated by Atlas.ti analytical software.


Data Analysis in Research: Types & Methods


Content Index

• What is data analysis in research?
• Why analyze data in research?
• Types of data in research
• Finding patterns in the qualitative data
• Methods used for data analysis in qualitative research
• Preparing data for analysis
• Methods used for data analysis in quantitative research
• Considerations in research data analysis

What is data analysis in research?

Definition of data analysis in research: According to LeCompte and Schensul, research data analysis is a process used by researchers to reduce data to a story and interpret it to derive insights. The data analysis process helps reduce a large chunk of data into smaller fragments that make sense.

Three essential things occur during the data analysis process. The first is data organization. The second is data reduction through summarization and categorization, which together help find patterns and themes in the data for easy identification and linking. The third is the analysis itself, which researchers approach in both top-down and bottom-up fashion.


On the other hand, Marshall and Rossman describe data analysis as a messy, ambiguous, and time-consuming but creative and fascinating process through which a mass of collected data is brought to order, structure and meaning.

We can say that data analysis and data interpretation together represent the application of deductive and inductive logic to research.

Why analyze data in research?

Researchers rely heavily on data, as they have a story to tell or research problems to solve. It starts with a question, and data is nothing but an answer to that question. But what if there is no question to ask? It is possible to explore data even without a problem: we call it ‘Data Mining’, and it often reveals interesting patterns within the data that are worth exploring.

Regardless of the type of data researchers explore, their mission and their audience’s vision guide them to find the patterns that shape the story they want to tell. One of the essential things expected from researchers while analyzing data is to stay open and remain unbiased toward unexpected patterns, expressions, and results. Remember, sometimes data analysis tells the most unforeseen yet exciting stories that were not expected when initiating the analysis. Therefore, rely on the data you have at hand and enjoy the journey of exploratory research.


Types of data in research

Every kind of data describes things by assigning specific values to them. For analysis, you need to organize, process, and present these values in a given context to make them useful. Data can come in different forms; here are the primary data types.

• Qualitative data: When the data presented consists of words and descriptions, we call it qualitative data. Although you can observe this data, it is subjective and harder to analyze in research, especially for comparison. Example: anything describing taste, experience, texture, or an opinion is considered qualitative data. This type of data is usually collected through focus groups, personal qualitative interviews, qualitative observation, or open-ended questions in surveys.
• Quantitative data: Any data expressed in numbers or numerical figures is called quantitative data. This type of data can be distinguished into categories, grouped, measured, calculated, or ranked. Example: questions about age, rank, cost, length, weight, scores, etc. all yield this type of data. You can present such data in graphical formats or charts, or apply statistical analysis methods to it. The OMS (Outcomes Measurement Systems) questionnaires in surveys are a significant source of numeric data.
• Categorical data: This is data presented in groups; an item included in categorical data cannot belong to more than one group. Example: a person responding to a survey by describing their living style, marital status, smoking habit, or drinking habit provides categorical data. A chi-square test is a standard method used to analyze this data (a minimal sketch follows this list).
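
As a rough illustration, here is a minimal sketch of a chi-square test of independence, assuming the SciPy library is available; the response counts below are invented for illustration only:

from scipy.stats import chi2_contingency

# Hypothetical observed counts: rows = marital status, columns = smoking habit
observed = [
    [45, 15],   # single:  [non-smoker, smoker]
    [60, 30],   # married: [non-smoker, smoker]
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, dof = {dof}")
# A small p-value (e.g., below 0.05) would suggest the two variables are related.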


Data analysis in qualitative research

Data analysis in qualitative research works a little differently from numerical data, as qualitative data is made up of words, descriptions, images, objects, and sometimes symbols. Getting insight from such complicated information is a complicated process; hence it is typically used for exploratory research and data analysis.

Finding patterns in the qualitative data

Although there are several ways to find patterns in textual information, a word-based method is the most relied-upon and widely used technique for research and data analysis. Notably, the data analysis process in qualitative research is largely manual: researchers usually read the available data and identify repetitive or commonly used words.

For example, while studying data collected from African countries to understand the most pressing issues people face, researchers might find “food” and “hunger” are the most commonly used words and will highlight them for further analysis. The sketch below shows one simple way to automate such a count.
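
Here is a minimal word-frequency sketch, assuming the transcripts are already available as plain-text strings; the sample responses and stopword list are invented:

from collections import Counter
import re

# Hypothetical interview responses
transcripts = [
    "We worry about food prices and hunger in the dry season.",
    "Hunger is the main issue; food aid rarely reaches our village.",
]

stopwords = {"we", "about", "and", "in", "the", "is", "our"}  # extend as needed
words = []
for text in transcripts:
    words += [w for w in re.findall(r"[a-z']+", text.lower()) if w not in stopwords]

print(Counter(words).most_common(5))
# e.g., [('food', 2), ('hunger', 2), ('worry', 1), ...]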


The keyword context is another widely used word-based technique. In this method, the researcher tries to understand the concept by analyzing the context in which the participants use a particular keyword.  

For example, researchers conducting research and data analysis for studying the concept of ‘diabetes’ amongst respondents might analyze the context of when and how a respondent has used or referred to the word ‘diabetes.’

The scrutiny-based technique is also one of the highly recommended text analysis methods used to identify patterns in qualitative data. Compare and contrast is the most widely used method under this technique, used to determine how specific texts are similar to or different from each other.

For example: to find out the “importance of a resident doctor in a company,” the collected data is divided into people who think it is necessary to hire a resident doctor and those who think it is unnecessary. Compare and contrast is the best method for analyzing polls with single-answer question types.

Metaphors can be used to reduce the data pile and find patterns in it so that it becomes easier to connect data with theory.

Variable partitioning is another technique used to split variables so that researchers can find more coherent descriptions and explanations from enormous data sets.


Methods used for data analysis in qualitative research

There are several techniques to analyze data in qualitative research, but here are some commonly used methods:

• Content Analysis: This is widely accepted and the most frequently employed technique for data analysis in research methodology. It can be used to analyze documented information from texts, images, and sometimes physical items. The research questions determine when and where to use this method.
• Narrative Analysis: This method is used to analyze content gathered from various sources, such as personal interviews, field observation, and surveys. Most of the time, the stories or opinions shared by people are examined to find answers to the research questions.
• Discourse Analysis: Similar to narrative analysis, discourse analysis is used to analyze interactions with people. Nevertheless, this particular method considers the social context under which, or within which, the communication between the researcher and respondent takes place. In addition, discourse analysis also considers the respondent’s lifestyle and day-to-day environment while deriving any conclusion.
• Grounded Theory: When you want to explain why a particular phenomenon happened, grounded theory is the best resort for analyzing qualitative data. Grounded theory is applied to study data about a host of similar cases occurring in different settings. When researchers use this method, they might alter their explanations or produce new ones until they arrive at a conclusion.


Data analysis in quantitative research

Preparing data for analysis

The first stage in research and data analysis is to prepare the data for analysis so that nominal data can be converted into something meaningful. Data preparation consists of the phases below.

Phase I: Data Validation

Data validation is done to understand whether the collected data sample meets the pre-set standards or is a biased sample. It is divided into four stages (the sketch after this list illustrates two of these checks in code):

  • Fraud: To ensure an actual human being records each response to the survey or the questionnaire
  • Screening: To make sure each participant or respondent is selected or chosen in compliance with the research criteria
  • Procedure: To ensure ethical standards were maintained while collecting the data sample
• Completeness: To ensure that the respondent answered all the questions in an online survey, or that the interviewer asked all the questions devised in the questionnaire.
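
Here is a minimal sketch of the screening and completeness checks, assuming the responses sit in a pandas DataFrame; the column names and criteria are invented for illustration:

import pandas as pd

# Hypothetical survey responses
df = pd.DataFrame({
    "age":       [25, 17, 42, 31],
    "consented": [True, True, True, False],
    "q1":        ["yes", "no", None, "yes"],
})

# Screening: keep only respondents who meet the research criteria
screened = df[(df["age"] >= 18) & df["consented"]]

# Completeness: flag respondents who skipped any question
incomplete = screened[screened.isna().any(axis=1)]
print(f"{len(incomplete)} incomplete response(s) to follow up on or drop")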

Phase II: Data Editing

More often than not, an extensive research data sample comes loaded with errors. Respondents sometimes fill in some fields incorrectly or skip them accidentally. Data editing is a process wherein the researchers confirm that the provided data is free of such errors. They conduct the necessary consistency and outlier checks to edit the raw data and make it ready for analysis, as in the sketch below.
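
As one possible automation, this sketch flags values more than two standard deviations from the mean, a common (though not universal) heuristic; the responses are invented:

import statistics as st

responses = [4, 5, 3, 4, 52, 5, 4]   # hypothetical numeric answers; 52 is a likely typo
mean, sd = st.mean(responses), st.stdev(responses)

# Flag values far from the mean for manual review
outliers = [x for x in responses if abs(x - mean) > 2 * sd]
print("values to re-check:", outliers)   # -> [52]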

Phase III: Data Coding

Out of all three, this is the most critical phase of data preparation, associated with grouping and assigning values to the survey responses. If a survey is completed with a 1,000-respondent sample size, the researcher might create age brackets to distinguish respondents based on their age. It then becomes easier to analyze small data buckets rather than deal with the massive data pile (see the sketch below).
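
A minimal coding sketch using pandas’ cut function; the bracket edges and labels are illustrative assumptions, not fixed rules:

import pandas as pd

ages = pd.Series([19, 23, 35, 47, 52, 61, 68])   # hypothetical respondent ages

# Code raw ages into brackets so responses can be analyzed per group
brackets = pd.cut(
    ages,
    bins=[18, 25, 40, 55, 70],
    labels=["18-25", "26-40", "41-55", "56-70"],
)
print(brackets.value_counts().sort_index())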


Methods used for data analysis in quantitative research

After the data is prepared for analysis, researchers are open to using different research and data analysis methods to derive meaningful insights. Statistical analysis is by far the most favored way to analyze numerical data. In statistical analysis, distinguishing between categorical data and numerical data is essential, as categorical data involves distinct categories or labels, while numerical data consists of measurable quantities. Statistical methods are classified into two groups: ‘descriptive statistics,’ used to describe data, and ‘inferential statistics,’ which help in comparing the data.

Descriptive statistics

This method is used to describe the basic features of versatile types of data in research. It presents the data in such a meaningful way that patterns in the data start making sense. Nevertheless, descriptive analysis does not go beyond summarization; any conclusions drawn are based on the hypotheses researchers have formulated so far. Here are a few major types of descriptive analysis methods, each illustrated in the short code sketch that follows the lists below.

Measures of Frequency

  • Count, Percent, Frequency
• It is used to denote how often a particular event occurs.
  • Researchers use it when they want to showcase how often a response is given.

Measures of Central Tendency

  • Mean, Median, Mode
  • The method is widely used to demonstrate distribution by various points.
• Researchers use this method when they want to showcase the most common or the average response.

Measures of Dispersion or Variation

  • Range, Variance, Standard deviation
• Range = highest score minus lowest score.
• Standard deviation = the typical distance between an observed score and the mean; variance is the square of the standard deviation.
  • It is used to identify the spread of scores by stating intervals.
• Researchers use this method to showcase how spread out the data is. It helps them see how far the scores extend from the center and how strongly that spread affects the mean.

Measures of Position

  • Percentile ranks, Quartile ranks
  • It relies on standardized scores helping researchers to identify the relationship between different scores.
  • It is often used when researchers want to compare scores with the average count.
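
The sketch below computes one measure from each of these four families on a small, invented set of survey scores, using only Python’s standard library (statistics.quantiles requires Python 3.8 or later):

import statistics as st

scores = [62, 75, 75, 81, 88, 90, 94]   # hypothetical survey scores

# Frequency
print("n =", len(scores))
# Central tendency
print("mean =", round(st.mean(scores), 1), "| median =", st.median(scores), "| mode =", st.mode(scores))
# Dispersion
print("range =", max(scores) - min(scores), "| stdev =", round(st.stdev(scores), 2))
# Position
print("quartiles =", st.quantiles(scores, n=4))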

For quantitative research, descriptive analysis gives absolute numbers, but those numbers alone are not sufficient to demonstrate the rationale behind them. Nevertheless, it is necessary to choose the method of research and data analysis that suits your survey questionnaire and the story researchers want to tell. For example, the mean is the best way to demonstrate students’ average scores in schools. It is better to rely on descriptive statistics when researchers intend to keep the research or outcome limited to the provided sample without generalizing it: for example, when you want to compare the average votes cast in two different cities, descriptive statistics are enough.

Descriptive analysis is also called a ‘univariate analysis’ since it is commonly used to analyze a single variable.

Inferential statistics

Inferential statistics are used to make predictions about a larger population after research and data analysis of a sample representing that population. For example, you can ask some 100 audience members at a movie theater if they like the movie they are watching. Researchers then use inferential statistics on the collected sample to reason that about 80-90% of people like the movie. (The sketch after the list below turns this example into a simple interval estimate.)

Here are two significant areas of inferential statistics.

  • Estimating parameters: It takes statistics from the sample research data and demonstrates something about the population parameter.
• Hypothesis test: It’s about sampling research data to answer the survey research questions. For example, researchers might be interested to understand if the new shade of lipstick recently launched is good or not, or if the multivitamin capsules help children perform better at games.
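
As a back-of-the-envelope sketch of the movie-theater example, suppose (hypothetically) that 85 of the 100 sampled viewers liked the film; a normal-approximation confidence interval then estimates the share in the wider audience:

import math

liked, n = 85, 100            # hypothetical sample results
p_hat = liked / n             # sample proportion
se = math.sqrt(p_hat * (1 - p_hat) / n)   # standard error of the proportion

low, high = p_hat - 1.96 * se, p_hat + 1.96 * se   # ~95% confidence interval
print(f"Estimated share who like the movie: {p_hat:.0%} (95% CI {low:.0%} to {high:.0%})")

This is the reasoning behind the 80-90% figure above: the sample proportion, plus a margin of error, stands in for the whole population.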

These are sophisticated analysis methods used to showcase the relationship between different variables instead of describing a single variable. It is often used when researchers want something beyond absolute numbers to understand the relationship between variables.

Here are some of the commonly used methods for data analysis in research.

• Correlation: When researchers are not conducting experimental or quasi-experimental research but are interested in understanding the relationship between two or more variables, they opt for correlational research methods.
• Cross-tabulation: Also called contingency tables, cross-tabulation is used to analyze the relationship between multiple variables. Suppose the provided data has age and gender categories presented in rows and columns; a two-dimensional cross-tabulation makes for seamless data analysis and research by showing the number of males and females in each age category (see the sketch after this list).
• Regression analysis: For understanding the strength of the relationship between two variables, researchers rarely look beyond the primary and commonly used regression analysis method, which is also a type of predictive analysis. In this method, you have an essential factor called the dependent variable and one or more independent variables. You undertake efforts to find out the impact of the independent variables on the dependent variable. The values of both independent and dependent variables are assumed to have been ascertained in an error-free, random manner.
• Frequency tables: This statistical procedure displays how often each response or value occurs, making it easy to spot the most and least common answers at a glance.
• Analysis of variance (ANOVA): This statistical procedure is used for testing the degree to which two or more groups vary or differ in an experiment. A considerable degree of variation means research findings were significant. In many contexts, ANOVA testing and variance analysis are similar.
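
Here is a minimal sketch of a cross-tabulation, a correlation, and a one-variable regression using pandas and NumPy; the survey data are invented for illustration:

import numpy as np
import pandas as pd

# Hypothetical survey data
df = pd.DataFrame({
    "gender":    ["F", "M", "F", "M", "F", "M"],
    "age_group": ["18-25", "18-25", "26-40", "26-40", "41-55", "41-55"],
    "income":    [30, 28, 52, 47, 75, 70],    # e.g., thousands per year
    "spend":     [120, 90, 150, 130, 170, 160],
})

# Cross-tabulation: respondents per age group and gender
print(pd.crosstab(df["age_group"], df["gender"]))

# Correlation between two numeric variables
print("corr(income, spend) =", round(df["income"].corr(df["spend"]), 2))

# Simple linear regression: spend as a function of income
slope, intercept = np.polyfit(df["income"], df["spend"], 1)
print(f"spend ~ {slope:.2f} * income + {intercept:.2f}")
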
Considerations in research data analysis

• Researchers must have the necessary research skills to analyze and manipulate the data, and should be trained to demonstrate a high standard of research practice. Ideally, researchers should possess more than a basic understanding of the rationale for selecting one statistical method over another to obtain better data insights.
• Usually, research and data analytics projects differ by scientific discipline; therefore, getting statistical advice at the beginning of the analysis helps in designing a survey questionnaire, selecting data collection methods, and choosing samples.


• The primary aim of data research and analysis is to derive insights that are unbiased. Any mistake in collecting data, selecting an analysis method, or choosing an audience sample will lead to a biased inference.
• No amount of sophistication in research data and analysis can rectify poorly defined objectives or outcome measurements. Whether the design is at fault or the intentions are not clear, a lack of clarity might mislead readers, so avoid that practice.
• The motive behind data analysis in research is to present accurate and reliable data. As far as possible, avoid statistical errors, and find a way to deal with everyday challenges like outliers, missing data, data altering, data mining, or developing graphical representations.

The sheer amount of data generated daily is staggering, especially now that data analysis has taken center stage. In 2018 alone, the total data supply amounted to 2.8 trillion gigabytes. Hence, it is clear that enterprises willing to survive in the hypercompetitive world must possess an excellent capability to analyze complex research data, derive actionable insights, and adapt to new market needs.


QuestionPro is an online survey platform that empowers organizations in data analysis and research and provides them a medium to collect data by creating appealing surveys.



Presenting and Evaluating Qualitative Research

The purpose of this paper is to help authors to think about ways to present qualitative research papers in the American Journal of Pharmaceutical Education . It also discusses methods for reviewers to assess the rigour, quality, and usefulness of qualitative research. Examples of different ways to present data from interviews, observations, and focus groups are included. The paper concludes with guidance for publishing qualitative research and a checklist for authors and reviewers.

INTRODUCTION

Policy and practice decisions, including those in education, increasingly are informed by findings from qualitative as well as quantitative research. Qualitative research is useful to policymakers because it often describes the settings in which policies will be implemented. Qualitative research is also useful to both pharmacy practitioners and pharmacy academics who are involved in researching educational issues in both universities and practice and in developing teaching and learning.

Qualitative research involves the collection, analysis, and interpretation of data that are not easily reduced to numbers. These data relate to the social world and the concepts and behaviors of people within it. Qualitative research can be found in all social sciences and in the applied fields that derive from them, for example, research in health services, nursing, and pharmacy. 1 It looks at X in terms of how X varies in different circumstances rather than how big is X or how many Xs are there? 2 Textbooks often subdivide research into qualitative and quantitative approaches, furthering the common assumption that there are fundamental differences between the 2 approaches. With pharmacy educators who have been trained in the natural and clinical sciences, there is often a tendency to embrace quantitative research, perhaps due to familiarity. A growing consensus is emerging that sees both qualitative and quantitative approaches as useful to answering research questions and understanding the world. Increasingly mixed methods research is being carried out where the researcher explicitly combines the quantitative and qualitative aspects of the study. 3 , 4

Like healthcare, education involves complex human interactions that can rarely be studied or explained in simple terms. Complex educational situations demand complex understanding; thus, the scope of educational research can be extended by the use of qualitative methods. Qualitative research can sometimes provide a better understanding of the nature of educational problems and thus add to insights into teaching and learning in a number of contexts. For example, at the University of Nottingham, we conducted in-depth interviews with pharmacists to determine their perceptions of continuing professional development and who had influenced their learning. We also have used a case study approach using observation of practice and in-depth interviews to explore physiotherapists' views of influences on their learning in practice. We have conducted in-depth interviews with a variety of stakeholders in Malawi, Africa, to explore the issues surrounding pharmacy academic capacity building. A colleague has interviewed and conducted focus groups with students to explore cultural issues as part of a joint Nottingham-Malaysia pharmacy degree program. Another colleague has interviewed pharmacists and patients regarding their expectations before and after clinic appointments and then observed pharmacist-patient communication in clinics and assessed it using the Calgary Cambridge model in order to develop recommendations for communication skills training. 5 We have also performed documentary analysis on curriculum data to compare pharmacist and nurse supplementary prescribing courses in the United Kingdom.

It is important to choose the most appropriate methods for what is being investigated. Qualitative research is not appropriate to answer every research question and researchers need to think carefully about their objectives. Do they wish to study a particular phenomenon in depth (eg, students' perceptions of studying in a different culture)? Or are they more interested in making standardized comparisons and accounting for variance (eg, examining differences in examination grades after changing the way the content of a module is taught). Clearly a quantitative approach would be more appropriate in the last example. As with any research project, a clear research objective has to be identified to know which methods should be applied.

Types of qualitative data include:

  • Audio recordings and transcripts from in-depth or semi-structured interviews
• Structured interview questionnaires containing a substantial number of responses to open-comment items
  • Audio recordings and transcripts from focus group sessions.
  • Field notes (notes taken by the researcher while in the field [setting] being studied)
  • Video recordings (eg, lecture delivery, class assignments, laboratory performance)
  • Case study notes
  • Documents (reports, meeting minutes, e-mails)
  • Diaries, video diaries
  • Observation notes
  • Press clippings
  • Photographs

RIGOUR IN QUALITATIVE RESEARCH

Qualitative research is often criticized as biased, small scale, anecdotal, and/or lacking rigor; however, when it is carried out properly it is unbiased, in depth, valid, reliable, credible and rigorous. In qualitative research, there needs to be a way of assessing the “extent to which claims are supported by convincing evidence.” 1 Although the terms reliability and validity traditionally have been associated with quantitative research, increasingly they are being seen as important concepts in qualitative research as well. Examining the data for reliability and validity assesses both the objectivity and credibility of the research. Validity relates to the honesty and genuineness of the research data, while reliability relates to the reproducibility and stability of the data.

The validity of research findings refers to the extent to which the findings are an accurate representation of the phenomena they are intended to represent. The reliability of a study refers to the reproducibility of the findings. Validity can be substantiated by a number of techniques including triangulation, use of contradictory evidence, respondent validation, and constant comparison. Triangulation is using 2 or more methods to study the same phenomenon. Contradictory evidence, often known as deviant cases, must be sought out, examined, and accounted for in the analysis to ensure that researcher bias does not interfere with or alter their perception of the data and any insights offered. Respondent validation, which is allowing participants to read through the data and analyses and provide feedback on the researchers' interpretations of their responses, provides researchers with a method of checking for inconsistencies, challenges the researchers' assumptions, and provides them with an opportunity to re-analyze their data. The use of constant comparison means that one piece of data (for example, an interview) is compared with previous data and not considered on its own, enabling researchers to treat the data as a whole rather than fragmenting it. Constant comparison also enables the researcher to identify emerging/unanticipated themes within the research project.

STRENGTHS AND LIMITATIONS OF QUALITATIVE RESEARCH

Qualitative researchers have been criticized for overusing interviews and focus groups at the expense of other methods such as ethnography, observation, documentary analysis, case studies, and conversational analysis. Qualitative research has numerous strengths when properly conducted.

Strengths of Qualitative Research

  • Issues can be examined in detail and in depth.
  • Interviews are not restricted to specific questions and can be guided/redirected by the researcher in real time.
  • The research framework and direction can be quickly revised as new information emerges.
  • The data, being based on human experience, are powerful and sometimes more compelling than quantitative data.
  • Subtleties and complexities about the research subjects and/or topic are discovered that are often missed by more positivistic enquiries.

Limitations of Qualitative Research

  • Research quality is heavily dependent on the individual skills of the researcher and is more easily influenced by the researcher's personal biases and idiosyncrasies.
  • Rigor is more difficult to maintain, assess, and demonstrate.
  • The volume of data makes analysis and interpretation time consuming.
  • It is sometimes not as well understood and accepted as quantitative research within the scientific community.
  • The researcher's presence during data gathering, which is often unavoidable in qualitative research, can affect the subjects' responses.
  • Issues of anonymity and confidentiality can present problems when presenting findings.
  • Findings can be more difficult and time consuming to characterize in a visual way.
  • Data usually are collected from a few cases or individuals, so findings cannot be generalized to a larger population; they can, however, be transferable to another setting.

PRESENTATION OF QUALITATIVE RESEARCH FINDINGS

The following extracts are examples of how qualitative data might be presented:

Data From an Interview.

The following is an example of how to present and discuss a quote from an interview.

The researcher should select quotes that are poignant and/or most representative of the research findings. Including large portions of an interview in a research paper is not necessary and often tedious for the reader. The setting and speakers should be established in the text at the end of the quote.

The student describes how he had used deep learning in a dispensing module and was able to draw on learning from a previous module: “I found that while using the e-learning programme I was able to apply the knowledge and skills that I had gained in last year's diseases and goals of treatment module.” (interviewee 22, male)

This is an excerpt from an article on curriculum reform that used interviews5:

The first question was, “Without the accreditation mandate, how much of this curriculum reform would have been attempted?” According to respondents, accreditation played a significant role in prompting the broad-based curricular change, and their comments revealed a nuanced view. Most indicated that the change would likely have occurred even without the mandate from the accreditation process: “It reflects where the profession wants to be … training a professional who wants to take on more responsibility.” However, they also commented that “if it were not mandated, it could have been a very difficult road.” Or it “would have happened, but much later.” The change would more likely have been incremental, “evolutionary,” or far more limited in its scope. “Accreditation tipped the balance” was the way one person phrased it. “Nobody got serious until the accrediting body said it would no longer accredit programs that did not change.”

Data From Observations.

The following example is some data taken from observation of pharmacist patient consultations using the Calgary Cambridge guide.6,7 The data are first presented and a discussion follows:

Pharmacist: We will soon be starting a stop smoking clinic.
Patient: Is the interview over now?
Pharmacist: No this is part of it. (Laughs) You can't tell me to bog off (sic) yet. (pause) We will be starting a stop smoking service here,
Patient: Yes.
Pharmacist: with one-to-one and we will be able to help you or try to help you. If you want it.

In this example, the pharmacist has picked up from the patient's reaction to the stop smoking clinic that she is not receptive to advice about giving up smoking at this time; in fact she would rather end the consultation. The pharmacist draws on his prior relationship with the patient and makes use of a joke to lighten the tone. He feels his message is important enough to persevere but he presents the information in a succinct and non-pressurised way. His final comment of “If you want it” is important as this makes it clear that he is not putting any pressure on the patient to take up this offer. This extract shows that some patient cues were picked up, and appropriately dealt with, but this was not the case in all examples.

Data From Focus Groups.

This excerpt from a study involving 11 focus groups illustrates how findings are presented using representative quotes from focus group participants.8

Those pharmacists who were initially familiar with CPD endorsed the model for their peers, and suggested it had made a meaningful difference in the way they viewed their own practice. In virtually all focus group sessions, pharmacists familiar with and supportive of the CPD paradigm had worked in collaborative practice environments such as hospital pharmacy practice. For these pharmacists, the major advantage of CPD was the linking of workplace learning with continuous education. One pharmacist stated, “It's amazing how much I have to learn every day, when I work as a pharmacist. With [the learning portfolio] it helps to show how much learning we all do, every day. It's kind of satisfying to look it over and see how much you accomplish.” Within many of the learning portfolio-sharing sessions, debates emerged regarding the true value of traditional continuing education and its outcome in changing an individual's practice. While participants appreciated the opportunity for social and professional networking inherent in some forms of traditional CE, most eventually conceded that the academic value of most CE programming was limited by the lack of a systematic process for following-up and implementing new learning in the workplace. “Well it's nice to go to these [continuing education] events, but really, I don't know how useful they are. You go, you sit, you listen, but then, well I at least forget.”

The following is an extract from a focus group (conducted by the author) with first-year pharmacy students about community placements. It illustrates how focus groups provide a chance for participants to discuss issues on which they might disagree.

Interviewer: So you are saying that you would prefer health related placements?
Student 1: Not exactly, so long as I could be developing my communication skill.
Student 2: Yes, but I still think the more health related the placement is the more I'll gain from it.
Student 3: I disagree because other people related skills are useful and you may learn those from taking part in a community project like building a garden.
Interviewer: So would you prefer a mixture of health and non health related community placements?

GUIDANCE FOR PUBLISHING QUALITATIVE RESEARCH

Qualitative research is becoming increasingly accepted and published in pharmacy and medical journals. Some journals and publishers have guidelines for presenting qualitative research, for example, the British Medical Journal9 and BioMed Central.10 Medical Education published a useful series of articles on qualitative research.11 Some of the important issues that should be considered by authors, reviewers, and editors when publishing qualitative research are discussed below.

Introduction.

A good introduction provides a brief overview of the manuscript, including the research question and a statement justifying the research question and the reasons for using qualitative research methods. This section also should provide background information, including relevant literature from pharmacy, medicine, and other health professions, as well as literature from the field of education that addresses similar issues. Any specific educational or research terminology used in the manuscript should be defined in the introduction.

Methods.

The methods section should clearly state and justify why the particular method, for example, face-to-face semi-structured interviews, was chosen. The method should be outlined and illustrated with examples, such as the interview questions, focusing exercises, observation criteria, etc. The criteria for selecting the study participants should then be explained and justified. The way in which the participants were recruited, and by whom, also must be stated. A brief explanation/description should be included of those who were invited to participate but chose not to. It is important to consider “fair dealing,” ie, whether the research design explicitly incorporates a wide range of different perspectives so that the viewpoint of 1 group is never presented as if it represents the sole truth about any situation. The process by which ethical and/or research/institutional governance approval was obtained should be described and cited.

The study sample and the research setting should be described. Sampling differs between qualitative and quantitative studies. In quantitative survey studies, it is important to select probability samples so that statistics can be used to provide generalizations to the population from which the sample was drawn. Qualitative research necessitates having a small sample because of the detailed and intensive work required for the study, so sample sizes are not calculated using mathematical rules and probability statistics are not applied. Instead, qualitative researchers should describe their sample in terms of characteristics and relevance to the wider population. Purposive sampling is common in qualitative research: particular individuals are chosen because they have characteristics relevant to the study and are thought likely to be the most informative. Purposive sampling also may be used to produce maximum variation within a sample, with participants chosen based, for example, on year of study, gender, place of work, etc. Representative samples also may be used, for example, 20 students from each of 6 schools of pharmacy. Convenience samples involve the researcher choosing those who are either most accessible or most willing to take part. This may be fine for exploratory studies; however, this form of sampling may be biased and unrepresentative of the population in question. Theoretical sampling uses insights gained from previous research to inform sample selection for a new study. The method for gaining informed consent from the participants should be described, as well as how anonymity and confidentiality of subjects were guaranteed. The method of recording, eg, audio or video recording, should be noted, along with procedures used for transcribing the data.

Data Analysis.

A description of how the data were analyzed also should be included. Was computer-aided qualitative data analysis software such as NVivo (QSR International, Cambridge, MA) used? Arrival at “data saturation” or the end of data collection should then be described and justified. A good rule when considering how much information to include is that readers should have been given enough information to be able to carry out similar research themselves.

One of the strengths of qualitative research is the recognition that data must always be understood in relation to the context of their production.1 The analytical approach taken should be described in detail and theoretically justified in light of the research question. If the analysis was repeated by more than 1 researcher to ensure reliability or trustworthiness, this should be stated and methods of resolving any disagreements clearly described. Some researchers ask participants to check the data; if this was done, it should be fully discussed in the paper.

An adequate account of how the findings were produced should be included. A description of how the themes and concepts were derived from the data also should be included. Was an inductive or deductive process used? The analysis should not be limited to just those issues that the researcher thinks are important (anticipated themes) but should also consider issues that participants raised (ie, emergent themes). Qualitative researchers must be open regarding the data analysis and provide evidence of their thinking: for example, were alternative explanations for the data considered and dismissed, and if so, why were they dismissed? It also is important to present outlying or negative/deviant cases that did not fit with the central interpretation.

The interpretation should usually be grounded in interviewees' or respondents' contributions and may be semi-quantified, if this is possible or appropriate, for example, “Half of the respondents said …” “The majority said …” “Three said…” Readers should be presented with data that enable them to “see what the researcher is talking about.”1 Sufficient data should be presented to allow the reader to clearly see the relationship between the data and the interpretation of the data. Qualitative data conventionally are presented by using illustrative quotes. Quotes are “raw data” and should be compiled and analyzed, not just listed. There should be an explanation of how the quotes were chosen and how they are labeled. For example, have pseudonyms been given to each respondent or are the respondents identified using codes, and if so, how? It is important for the reader to be able to see that a range of participants have contributed to the data and that not all the quotes are drawn from 1 or 2 individuals. There is a tendency for authors to overuse quotes and for papers to be dominated by a series of long quotes with little analysis or discussion. This should be avoided.
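Where semi-quantification is appropriate, the tallies behind phrases like “half of the respondents said…” can be produced directly from the coded data. Below is a minimal Python sketch assuming a hypothetical list of (participant, theme) codes; the participant labels and theme names are illustrative and do not come from the studies quoted above.

```python
from collections import Counter

# Hypothetical coded interview quotes: (participant_id, theme) pairs.
# All labels here are invented for illustration.
coded_quotes = [
    ("P01", "workplace learning"), ("P02", "workplace learning"),
    ("P03", "value of CE"), ("P04", "workplace learning"),
    ("P05", "value of CE"), ("P06", "anonymity concerns"),
]

# Tally how many participants contributed to each theme.
theme_counts = Counter(theme for _, theme in coded_quotes)
total_participants = len({pid for pid, _ in coded_quotes})

for theme, n in theme_counts.most_common():
    print(f"{n} of {total_participants} participants mentioned '{theme}'")
```

A tally like this also makes it easy to check that quotes are drawn from a range of participants rather than 1 or 2 individuals.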

Participants do not always state the truth and may say what they think the interviewer wishes to hear. A good qualitative researcher should not only examine what people say but also consider how they structured their responses and how they talked about the subject being discussed, for example, the person's emotions, tone, nonverbal communication, etc. If the research was triangulated with other qualitative or quantitative data, this should be discussed.

Discussion.

The findings should be presented in the context of any similar previous research and/or theories. A discussion of the existing literature and how this present research contributes to the area should be included. A consideration must also be made of how transferable the research would be to other settings. Any particular strengths and limitations of the research also should be discussed. It is common practice to include some discussion within the results section of qualitative research and follow with a concluding discussion.

The author also should reflect on their own influence on the data, including a consideration of how the researcher(s) may have introduced bias to the results. The researcher should critically examine their own influence on the design and development of the research, as well as on data collection and interpretation of the data, eg, were they an experienced teacher who researched teaching methods? If so, they should discuss how this might have influenced their interpretation of the results.

Conclusion.

The conclusion should summarize the main findings from the study and emphasize what the study adds to knowledge in the area being studied. Mays and Pope suggest the researcher ask the following 3 questions to determine whether the conclusions of a qualitative study are valid12: How well does this analysis explain why people behave in the way they do? How comprehensible would this explanation be to a thoughtful participant in the setting? How well does the explanation cohere with what we already know?

CHECKLIST FOR QUALITATIVE PAPERS

This paper establishes criteria for judging the quality of qualitative research. It provides guidance for authors and reviewers to prepare and review qualitative research papers for the American Journal of Pharmaceutical Education . A checklist is provided in Appendix 1 to assist both authors and reviewers of qualitative data.

ACKNOWLEDGEMENTS

Thank you to the 3 reviewers whose ideas helped me to shape this paper.

Appendix 1. Checklist for authors and reviewers of qualitative research.

Introduction

  • □ Research question is clearly stated.
  • □ Research question is justified and related to the existing knowledge base (empirical research, theory, policy).
  • □ Any specific research or educational terminology used later in the manuscript is defined.

Methods

  • □ The process by which ethical and/or research/institutional governance approval was obtained is described and cited.
  • □ Reason for choosing particular research method is stated.
  • □ Criteria for selecting study participants are explained and justified.
  • □ Recruitment methods are explicitly stated.
  • □ Details of who chose not to participate and why are given.
  • □ Study sample and research setting used are described.
  • □ Method for gaining informed consent from the participants is described.
  • □ Maintenance/Preservation of subject anonymity and confidentiality is described.
  • □ Method of recording data (eg, audio or video recording) and procedures for transcribing data are described.
  • □ Methods are outlined and examples given (eg, interview guide).
  • □ Decision to stop data collection is described and justified.
  • □ Data analysis and verification are described, including by whom they were performed.
  • □ Methods for identifying/extrapolating themes and concepts from the data are discussed.

Results

  • □ Sufficient data are presented to allow a reader to assess whether or not the interpretation is supported by the data.
  • □ Outlying or negative/deviant cases that do not fit with the central interpretation are presented.

Discussion

  • □ Transferability of research findings to other settings is discussed.
  • □ Findings are presented in the context of any similar previous research and social theories.
  • □ Discussion often is incorporated into the results in qualitative papers.
  • □ A discussion of the existing literature and how this present research contributes to the area is included.
  • □ Any particular strengths and limitations of the research are discussed.
  • □ Reflection of the influence of the researcher(s) on the data, including a consideration of how the researcher(s) may have introduced bias to the results is included.

Conclusions

  • □ The conclusion states the main findings of the study and emphasizes what the study adds to knowledge in the subject area.

Not all data are created equal: some are structured, but most are unstructured. Structured and unstructured data are sourced, collected, and scaled in different ways, and each resides in a different type of database.

In this article, we will take a deep dive into both types so that you can get the most out of your data.

Structured data—typically categorized as quantitative data—is highly organized and easily decipherable by machine learning algorithms. Developed by IBM® in 1974, structured query language (SQL) is the programming language used to manage structured data. By using a relational (SQL) database, business users can quickly input, search, and manipulate structured data.
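As a concrete illustration of that ease of input and search, here is a minimal Python sketch using the standard library's sqlite3 module; the table, columns, and record are illustrative assumptions, not part of any particular product's schema.

```python
import sqlite3

# A minimal sketch of structured data in a relational (SQL) database.
# Table and column names are illustrative assumptions.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, "
    "signup_date TEXT, credit_limit REAL)"
)
conn.execute(
    "INSERT INTO customers (name, signup_date, credit_limit) VALUES (?, ?, ?)",
    ("Ada Example", "2024-01-15", 5000.0),
)

# The rigid, predefined schema makes querying straightforward.
for row in conn.execute("SELECT name, credit_limit FROM customers"):
    print(row)
conn.close()
```

The predefined schema is exactly what gives structured data its ease of use, and also its inflexibility, as the pros and cons below describe.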

Examples of structured data include dates, names, addresses, credit card numbers, among others. Their benefits are tied to ease of use and access, while liabilities revolve around data inflexibility:

  • Easily used by machine learning (ML) algorithms:  The specific and organized architecture of structured data eases the manipulation and querying of ML data.
  • Easily used by business users:  Structured data do not require an in-depth understanding of different types of data and how they function. With a basic understanding of the topic relative to the data, users can easily access and interpret the data.
  • Accessible by more tools:  Since structured data predates unstructured data, there are more tools available for using and analyzing structured data.
  • Limited usage:  Data with a predefined structure can only be used for its intended purpose, which limits its flexibility and usability.
  • Limited storage options:  Structured data are usually stored in data storage systems with rigid schemas (for example, data warehouses). Therefore, changes in data requirements necessitate an update of all structured data, which leads to a massive expenditure of time and resources.
Common tools used to manage structured data include:

  • OLAP:  Performs high-speed, multidimensional data analysis from unified, centralized data stores.
  • SQLite:  Implements a self-contained, serverless, zero-configuration, transactional relational database engine.
  • MySQL:  Embeds data into mass-deployed software, particularly mission-critical, heavy-load production systems.
  • PostgreSQL:  Supports SQL and JSON querying as well as high-tier programming languages (C/C++, Java, Python, among others).
Typical use cases for structured data include:

  • Customer relationship management (CRM):  CRM software runs structured data through analytical tools to create datasets that reveal customer behavior patterns and trends.
  • Online booking:  Hotel and ticket reservation data (for example, dates, prices, destinations) fits the “rows and columns” format indicative of the predefined data model.
  • Accounting:  Accounting firms or departments use structured data to process and record financial transactions.

Unstructured data, typically categorized as qualitative data, cannot be processed and analyzed through conventional data tools and methods. Since unstructured data does not have a predefined data model, it is best managed in non-relational (NoSQL) databases. Another way to manage unstructured data is to use data lakes to preserve it in raw form.
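To make the “preserve it in raw form” idea concrete, here is a minimal sketch that mimics a data-lake-style layout on the local filesystem; the directory names and the sample email are illustrative assumptions, not a real lake architecture.

```python
from pathlib import Path

# A minimal sketch of keeping unstructured data in raw form, loosely
# mimicking a data-lake layout. Paths and content are illustrative.
lake = Path("data_lake/raw/support_emails")
lake.mkdir(parents=True, exist_ok=True)

raw_email = "Subject: Order delayed\n\nHi, my order still hasn't shipped..."
(lake / "email_0001.txt").write_text(raw_email)

# Nothing is parsed or schema-checked until the data is actually needed:
# the file is simply read back and interpreted at analysis time.
text = (lake / "email_0001.txt").read_text()
print(len(text.split()), "words of raw text retained")
```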

The importance of unstructured data is rapidly increasing. Recent projections indicate that unstructured data makes up over 80% of all enterprise data, while 95% of businesses prioritize unstructured data management.

Examples of unstructured data include text, mobile activity, social media posts, Internet of Things (IoT) sensor data, among others. Their benefits involve advantages in format, speed and storage, while liabilities revolve around expertise and available resources:

  • Native format:  Unstructured data, stored in its native format, remains undefined until needed. Because it accommodates many file formats, it widens the data pool and enables data scientists to prepare and analyze only the data they need.
  • Fast accumulation rates:  Since there is no need to predefine the data, it can be collected quickly and easily.
  • Data lake storage:  Allows for massive storage and pay-as-you-use pricing, which cuts costs and eases scalability.
  • Requires expertise:  Due to its undefined or non-formatted nature, data science expertise is required to prepare and analyze unstructured data. This is beneficial to data analysts but alienates unspecialized business users who might not fully understand specialized data topics or how to utilize their data.
  • Specialized tools:  Specialized tools are required to manipulate unstructured data, which limits product choices for data managers.
Common tools used to manage unstructured data include:

  • MongoDB:  Uses flexible documents to process data for cross-platform applications and services.
  • DynamoDB:  Delivers single-digit millisecond performance at any scale through built-in security, in-memory caching, and backup and restore.
  • Hadoop:  Provides distributed processing of large data sets using simple programming models and no formatting requirements.
  • Azure:  Enables agile cloud computing for creating and managing apps through Microsoft's data centers.
Typical use cases for unstructured data include:

  • Data mining:  Enables businesses to use unstructured data to identify consumer behavior, product sentiment, and purchasing patterns to better accommodate their customer base.
  • Predictive data analytics:  Alerts businesses to important activity ahead of time so they can properly plan and adjust to significant market shifts.
  • Chatbots:  Perform text analysis to route customer questions to the appropriate answer sources.

While structured (quantitative) data gives a “bird's-eye view” of customers, unstructured (qualitative) data provides a deeper understanding of customer behavior and intent. Let's explore some of the key areas of difference and their implications:

  • Sources:  Structured data is sourced from GPS sensors, online forms, network logs, web server logs, OLTP systems, among others, whereas unstructured data sources include email messages, word-processing documents, PDF files, and others.
  • Forms:  Structured data consists of numbers and values, whereas unstructured data consists of sensor data, text files, audio files, and video files, among others.
  • Models:  Structured data has a predefined data model and is formatted to a set data structure before being placed in data storage (schema-on-write), whereas unstructured data is stored in its native format and not processed until it is used (schema-on-read); a minimal sketch contrasting the two follows this list.
  • Storage:  Structured data is stored in tabular formats (for example, Excel sheets or SQL databases) that require less storage space. It can be stored in data warehouses, which makes it highly scalable. Unstructured data, on the other hand, is stored as media files or in NoSQL databases, which require more space. It can be stored in data lakes, which makes it difficult to scale.
  • Uses:  Structured data is used in machine learning (ML) and drives its algorithms, whereas unstructured data is used in natural language processing (NLP) and text mining.
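As referenced in the Models item above, the following minimal Python sketch contrasts schema-on-write (structure enforced before storage) with schema-on-read (raw records interpreted only when read); all table, field, and record names are illustrative assumptions.

```python
import json
import sqlite3

# Schema-on-write: the structure is enforced before storage, so every
# record must fit the predefined table definition.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (user TEXT, action TEXT, ts TEXT)")
db.execute("INSERT INTO events VALUES (?, ?, ?)", ("u1", "login", "2024-05-01"))

# Schema-on-read: raw records are stored as-is; structure is imposed
# only at read time, so records with different shapes can coexist.
raw_records = [
    '{"user": "u1", "action": "login", "ts": "2024-05-01"}',
    '{"user": "u2", "comment": "free-form text, no fixed shape"}',
]
for line in raw_records:
    record = json.loads(line)  # structure imposed only now
    print(record.get("action", "<no action field>"))
db.close()
```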

Semi-structured data (for example, JSON, CSV, XML) is the “bridge” between structured and unstructured data. It does not have a predefined data model and is more complex than structured data, yet easier to store than unstructured data.

Semi-structured data uses “metadata” (for example, tags and semantic markers) to identify specific data characteristics and scale data into records and preset fields. Metadata ultimately enables semi-structured data to be better cataloged, searched and analyzed than unstructured data.

  • Example of metadata usage:  An online article displays a headline, a snippet, a featured image, image alt-text, slug, among others, which helps differentiate one piece of web content from similar pieces.
  • Example of semi-structured data vs. structured data:  A tab-delimited file containing customer data versus a database containing CRM tables.
  • Example of semi-structured data vs. unstructured data:  A tab-delimited file versus a list of comments from a customer’s Instagram.
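To see how metadata makes semi-structured records searchable without a rigid table schema, consider this minimal JSON sketch; the field names echo the web-content example above but are otherwise illustrative assumptions.

```python
import json

# A semi-structured record: it carries its own metadata tags, so it can
# be cataloged and searched without predefining a schema for every field.
article = {
    "headline": "Understanding Data Presentations",
    "snippet": "A guide to charts, graphs, and dashboards...",
    "tags": ["data", "visualization"],
    "image_alt_text": "Bar chart comparing quarterly sales",
}

serialized = json.dumps(article)

# Search by metadata at read time; missing fields are tolerated.
loaded = json.loads(serialized)
if "visualization" in loaded.get("tags", []):
    print("Matched:", loaded["headline"])
```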

Recent developments in artificial intelligence (AI) and machine learning (ML) are driving the future wave of data, which is enhancing business intelligence and advancing industrial innovation. In particular, the data formats and models that are covered in this article are helping business users to do the following:

  • Analyze digital communications for compliance:  Pattern recognition and email threading analysis software can search email and chat data for potential noncompliance.
  • Track high-volume customer conversations in social media:  Text analytics and sentiment analysis enable monitoring of marketing campaign results and identification of online threats; a toy keyword-based sketch follows this list.
  • Gain new marketing intelligence:  ML analytics tools can quickly process massive amounts of data to help businesses analyze customer behavior.
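The sketch referenced in the list above is a deliberately simple, keyword-based sentiment scorer; production systems use trained models, and the word lists and posts here are invented for illustration only.

```python
# A toy sketch of keyword-based sentiment scoring for social posts.
# Word lists and posts are illustrative assumptions, not a real model.
POSITIVE = {"love", "great", "amazing", "helpful"}
NEGATIVE = {"slow", "broken", "terrible", "refund"}

posts = [
    "Love the new dashboard, great work!",
    "App is slow and checkout is broken. I want a refund.",
]

for post in posts:
    # Normalize words by stripping punctuation and lowercasing.
    words = {w.strip(".,!?").lower() for w in post.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    print(f"{label:>8}: {post}")
```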

Furthermore, smart and efficient usage of data formats and models can help you with the following:

  • Understand customer needs at a deeper level to better serve them
  • Create more focused and targeted marketing campaigns
  • Track current metrics and create new ones
  • Create better product opportunities and offerings
  • Reduce operational costs

Whether you are a seasoned data expert or a novice business owner, being able to handle all forms of data is conducive to your success. By using structured, semi-structured and unstructured data options, you can perform optimal data management that will ultimately benefit your mission.
