
PERCENTAGE ANALYSIS

DATA ANALYSIS & INTERPRETATION

Percentage analysis (with values ranging from 0 to 100 per cent) is used for better understanding of the collected data. The collected data is also represented in table format.

• To motivate the employees towards the change implemented.

• To study the change management strategies adopted by the management.

• Increased job satisfaction.

• Increased productivity and creativity.


Data analysis is considered an important step and the heart of the research in any research work.

After collecting data with the help of relevant tools and techniques, the next logical step is to analyse and interpret the data with a view to arriving at an empirical solution to the problem. The data analysis for the present research was done quantitatively with the help of both descriptive and inferential statistics.

STATISTICAL TOOLS USED

• Percentage Analysis.

• Net Weighted Average Analysis.

CAMS Journal of Business Studies and Research, ISSN: 0975-7953, July-September

Percentage analysis refers to a special kind of ratio. Percentages are used in making comparisons between two or more series of data; finding the relative differences between the series becomes easier through percentages.

It is expressed as,

Percentage (%) = (No. of respondents / Total no. of respondents) × 100
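As an illustration, the formula can be sketched in a few lines of Python (the function name is hypothetical; the figures reproduce the gender classification in Table 4.1.1):

```python
def percentage(count, total):
    """Share of respondents choosing an option, as a percentage."""
    return round(count / total * 100, 2)

# 15 male and 25 female respondents out of a 40-person sample
print(percentage(15, 40))  # 37.5
print(percentage(25, 40))  # 62.5
```

The per-category percentages always sum to 100 when every respondent falls into exactly one category.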

4.1.1. Classification based on gender

Table 4.1.1 Classification based on gender

Gender    Number    Percentage
Male      15        37.50
Female    25        62.50
Total     40        100.00

(Source: Primary Data)

Interpretation:

From the above table it is inferred that 37.50% of the respondents are Male and 62.50% are Female.

4.1.2. Classification based on Designation

Table 4.1.2 Classification based on Designation

Designation            Number    Percentage
HR Executive           3         7.50
Marketing Executive    20        50.00
Manager                5         12.50
Accountant             12        30.00
Total                  40        100.00

From the above table it is inferred that from the 40 samples taken, 7.50% of the employees are HR Executives, 50% of the employees are Marketing Executives, 12.50% of the employees are Managers and 30% of the employees are Accountants.

4.1.3. Are you satisfied with the existing system of organization?

Table 4.1.3 Classification based on the satisfaction of the existing system of organization

Level of Satisfaction    Number    Percentage
Highly Satisfied         25        62.50
Satisfied                10        25.00
Neutral                  0         0.00


From the above table it is inferred that from the 40 samples taken, 87.50% of the employees are satisfied with the existing system followed in the organisation and 12.50% are not satisfied with it.

4.1.4. Do you agree with the changes brought in the existing system?

Table 4.1.4 Classification based on the level of satisfaction of the changes brought in the existing system

Strongly Agree 10 25

From the above table it is inferred that from the 40 samples taken, 62.50% of the employees agree with the changes to be brought in the organisation, 32.50% of the employees do not agree and 5% of the employees do not have any opinion regarding the changes to be brought in the organization.

4.1.5. Do you accept the new ERP system?

Table 4.1.5 Classification based on the level of acceptance of the new ERP system

Strongly Agree    18    45.00

…2.50% of the employees do not have any opinion.

4.1.6. Do you think the new system will help in career growth?

Table 4.1.6 Classification based on the level of satisfaction that the new system will help in career growth

Level of Satisfaction    Number    Percentage
Strongly Agree           5         12.50
Agree                    25        62.50
Neutral                  5         12.50
Disagree                 4         10.00
Strongly Disagree        1         2.50

From the above table it is inferred that from the 40 samples taken, 75% of the employees think that the new system will help in career growth, 12.50% are of the opinion that the new system does not help in career growth and 12.50% have no opinion.

4.1.7. Will you gladly accept the changes in job profile due to implementation of ERP?

Table 4.1.7 Classification based on the level of satisfaction of the changes in job profile due to implementation of ERP

Level of Satisfaction    Number    Percentage
Strongly Agree           8         20.00
Agree                    10        25.00
Neutral                  5         12.50
Disagree                 15        37.50
Strongly Disagree        2         5.00

From the above table it is inferred that from the 40 samples taken, 45% of the employees accept the change in job profile due to ERP implementation, 42.50% of the employees do not accept the change and 12.50% of the employees have no opinion.

4.1.8. Do you require training to cope up with the changes being implemented?

Table 4.1.8 Classification based on the level of satisfaction of requirement of training to cope up with the changes being implemented

Level of Satisfaction    Number    Percentage
Strongly Agree           15        37.50
Agree                    17        42.50
Neutral                  5         12.50
Disagree                 3         7.50
Strongly Disagree        0         0.00

From the above table it is inferred that from the 40 samples taken, 80% of the employees are of the opinion that they need training to cope up with changes being implemented, 7.50% of the employees do not need training and 12.50% of the employees have no opinion.

4.1.9. Do you think the new system goes hand in hand with the company’s policy?

Table 4.1.9 Classification based on the level of satisfaction that the new system goes hand in hand with the company’s policy

Agree 15 37.50

Disagree 5 12.50

From the above table it is inferred that from the 40 samples taken, 75% of the employees are of the opinion that the new system will go hand in hand with the company’s policies, 12.50% think that it will not, and 12.50% have no opinion.

4.1.10. Do you think the new system will help to reduce the existing work load?

Table 4.1.10 Classification based on level of satisfaction that the new system will help to reduce the existing work load

Strongly Agree 25 62.50

From the above table it is inferred that from the 40 samples taken, 87.50% of the employees think the new system will reduce the work load and 12.50% think it will not.

Fig 4.1.10 Showing the level of satisfaction of employees

4.1.11. Do you think the new system will increase productivity?

Table 4.1.11 Classification based on the level of satisfaction that the new system will increase productivity

Strongly Agree 22 55

(Source: Primary Data)

Interpretation:

From the above table it is inferred that from the 40 samples taken, 92.50% of the employees think the new system will increase the productivity of the organisation and 7.50% think it will not.

4.1.12. Do you agree the new system will enhance the sales or turnover of the company?


Table 4.1.12 Classification based on the level of satisfaction that the new system will enhance the sales or turnover of the company

Strongly Agree    10    25.00

…of the employees think the new system will not increase the sales and turnover of the organisation and 20% of the employees have no opinion.

Fig 4.1.12 Shows the level of satisfaction of employees

4.1.13. Do you think the new system will improve the organizational climate and culture?

Table 4.1.13 Classification based on the level of satisfaction that the new system will improve the organizational climate and culture


From the above table it is inferred that from the 40 samples taken, 55% of the employees think the new system will improve the organisational climate and culture of the organisation, 25% think it will not and 20% have no opinion.

4.1.14. Do you think the new system will provide you job security?

Table 4.1.14 Classification based on the level of satisfaction that the new system will provide job security

Level of Satisfaction    Number    Percentage
Agree                    12        30.00
Neutral                  15        37.50
Strongly Disagree        3         7.50

From the above table it is inferred that from the 40 samples taken, 42.50% of the employees think the new system will provide them job security, 12.50% think it will not and 37.50% have no opinion.

4.1.15. Do you think the ERP system will enhance the competency of the organization?

Table 4.1.15 Classification based on the level of satisfaction that the ERP system will enhance the competency of the organization

Strongly Agree 2 5

Disagree 8 20

From the above table it is inferred that from the 40 samples taken, 42.50% of the employees think the ERP system will enhance the competency of the organization, 20% think it will not and 37.50% have no opinion.

4.1.16. Do you think the ERP system will help improve inventory management and material delivery?

Table 4.1.16 Classification based on the level of satisfaction that the ERP system will help improve inventory management and material delivery

Neutral 9 22.50

From the above table it is inferred that from the 40 samples taken, 55% of the employees think the ERP system will improve inventory management and material delivery, 22.50% think it will not and 22.50% have no opinion.

4.1.17. Do you think educating and communicating with the employees is necessary before introducing new strategies?

Table 4.1.17 Classification based on the level of satisfaction that educating and communicating with the employees is necessary before introducing new strategies

Agree 20 50

From the above table it is inferred that from the 40 samples taken, 62.50% of the employees think that educating and communicating with the employees before introducing new strategies is required, 25% think it is not required and 12.50% have no opinion.

On the basis of the data analysis, it is found that:

1. The majority of the employees are female.

2. The majority of the employees have not completed graduation.

3. Most of them are highly satisfied with the existing system followed in the organisation.

4. Many of the employees agree with the changes to be brought in the organisation.

5. Most of the employees accept the ERP system, but there is reluctance from some of them.

6. Many of the employees are of the opinion that the new system will help in their career growth.

7. The majority of the employees accept the change in job profile due to ERP implementation.

8. Most of the employees have agreed that training is needed.

9. Most of the employees agree that the new system goes hand in hand with the company’s policies.

10. Employees agree that the new system will help to reduce the existing workload.

11. Many of the respondents are of the opinion that the new system will help in increasing the productivity of the organisation.

12. Many of the employees are of the opinion that the new system will enhance the sales or turnover of the company.

13. It is found that the new system will help in improving the organisational climate and culture.

14. Employees agree that the new system will provide job security.

15. Most of the employees agree that the ERP system will enhance the competency of the organisation.

16. The majority of the employees agree that the ERP system helps improve inventory management and material delivery.

17. Many of the employees are of the opinion that educating and communicating with the employees before introducing strategies is necessary.

The following suggestions can be derived from the above findings:

1. The reluctance of the employees has to be removed.

2. The employees have to be given a clear idea about the changes to be brought in the organisation.

3. The opinion of the employees has to be taken into account in the change management process.

4. Training the employees in order to cope with the changes is inevitable.

5. Employees have to be included in the change management team.

6. The ERP system should be tested in parallel with the existing accounting or billing software before implementing it completely.

7. The change in job position must not affect the efficiency of the organisation.

8. There should be a proper reporting route for the employees.

9. The grievance handling system should be more efficient.

10. There should be a good appraisal system to help career growth.

From the study conducted, it is clear that the company is doing a good job in managing the changes brought in the organisation.

As there is tight competition in the industry, the company should keep its strategies and policies in order so that the employees are always motivated; by this, the company can ensure that employees have high morale towards the company. The company should keep watch on its strategic alliances and improve its policies in order to compete with its competitors. A good stand in the market will ensure higher motivation and morale towards the company from the employees’ side.


The Power of Percentages: An In-Depth Exploration of Percentages and their Applications


Percentages are everywhere in our daily lives, from calculating taxes to understanding probabilities. In fact, we cannot go a day without encountering a percentage in some form. Despite this, many people still struggle to understand the concept of percentages. This article seeks to provide a comprehensive overview of percentages and their applications, with a special focus on their significance in scientific research and the world of business.

First, we will define percentages and their prevalence in our daily lives. Then, we will take a brief look at the history of percentages and their origin. Following this, we will examine the use of percentages in the world of business and explain their relevance. Lastly, we will discuss the significant role of percentages in scientific research.

Through the course of this article, we aim to help the reader gain a better understanding of percentages and their applications. Whether it is calculating discounts or analyzing scientific data, percentages play a crucial role in our lives. We hope this comprehensive overview will enable the reader to comprehend the power of percentages and their impact on our world.

Understanding Percentages: The Basics

Percentages are an essential part of modern life, and their importance cannot be overstated. Simply put, a percentage is a portion of a whole, expressed as a fraction of 100. For example, a percentage can be used to express a portion of a group, such as the percentage of women in a particular profession or the percentage of homeowners in a given area.

To calculate a percentage, you divide the number in question by the total and then multiply the result by 100. For instance, if you want to know what percentage of a group is female, you would divide the number of women by the total number of people and then multiply the result by 100.

Percentages are used in countless ways in daily life. For example, they are commonly used to express grades, such as scoring a 95% on an exam. Percentages are also used in finance and economics to express interest rates and changes in the stock market.

A percentage can also be used to show a change over time, which is known as percentage change. It is calculated by dividing the difference between the two values by the original value and then multiplying the result by 100. This is an important calculation in business and finance, as it is often used to track growth or decline in a particular metric, such as sales or revenue.

It is also essential to distinguish between percentage increase and decrease. A percentage increase represents the amount by which something has grown, whereas a percentage decrease shows how much it has declined.
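As a quick sketch, the percentage-change calculation and the increase/decrease distinction can be expressed in Python (the function name and figures are illustrative):

```python
def percentage_change(old, new):
    """Percentage change from old to new: positive means an
    increase, negative means a decrease."""
    return (new - old) / old * 100

# Revenue growing from 200 to 250 is a 25% increase...
print(percentage_change(200, 250))   # 25.0
# ...but falling back from 250 to 200 is only a 20% decrease,
# because the base of the calculation has changed.
print(percentage_change(250, 200))   # -20.0
```

The asymmetry in the example is why a percentage increase and a percentage decrease of the same absolute size are not interchangeable.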

Understanding the basics of percentages is a crucial foundation for their application in various fields, and it is essential to master this skill for success in specific industries.

The Versatility of Percentages: Applications in Daily Life

Percentages are ubiquitous in our daily lives, with applications in finance, economics, sports, and health. Here are a few ways that percentages are used in everyday scenarios:

  • Finance and economics: Percentages play a crucial role in the world of business and finance. Bank loans often have a percentage as an interest rate, and credit card companies charge a percentage as interest on unpaid balances. Understanding how percentages work is vital when it comes to making informed financial decisions.
  • Sales, discounts, and deals: When shopping, we often come across deals and discounts that are presented as percentages. Retailers use percentages as a way to attract customers with the promise of discounts. It is important to understand how percentages work, as a deal that may seem good could end up costing more money in the long run.
  • Sports statistics: Percentages are regularly used in sports to analyze player performance, such as batting averages in baseball or shooting percentages in basketball. Percentages give a more accurate representation of a player's skill compared to raw numbers, as they take into account the number of attempts made.
  • Usage of percentages in health: Percentages play a significant role in the healthcare industry. Weight loss programs often use percentages to track progress, such as the percentage of body fat lost or the percentage of pounds lost. Percentages are also used in medication dosages, as the amount given is typically a percentage of body weight.

The Power of Percentages in Data Analysis and Statistics

Percentages play a vital role in data analysis and statistics. They provide a quick and easy way to present and compare data, which makes them ideal for understanding trends and patterns. Here, we will explore how percentages are used in data analysis and statistics.

Calculation of Percentages in Charts, Graphs, and Tables

Graphs, charts, and tables are popular ways of presenting data. Percentages are often used to compare data in these formats. They allow the data to be presented in a meaningful and straightforward way, making it easier for people to understand.

For example, a line graph can show the percentage of people who own a car in a particular state over a period. A table can show the percentage of people who prefer different ice cream flavors. A pie chart can show the percentage of sales for different products.

Use of Percentages in Polling and Survey Results

Polls and surveys are essential ways of gathering data and opinions. Percentages are used in polls and surveys to present the results in an understandable way. For example, a poll could have a question, "Who do you think will win the election?" The answers to this question can be presented as a percentage.

Suppose a survey question is "How many people prefer to work from home?" The results can be presented in the form of a percentage, giving a clear indicator of the preference of the population.

Advantages and Limitations of Using Percentages in Data Interpretation

Percentages have numerous advantages, including an easy way of communicating results, making comparisons, and understanding trends. However, there are some limitations to their use. For instance, percentages can be distorted when used in small sample sizes or when measuring subjective data such as feelings or opinions.

It is crucial to understand the limitations of percentages when interpreting data. This understanding can help us prevent biased and flawed conclusions from being drawn. At the same time, it can help us see trends and changes in data accurately.

Understanding Percentages in Business and Marketing

Percentages play a crucial role in business and marketing analytics. They are utilized to analyze sales and plan future marketing strategies.

Use of Percentages in Business and Marketing Analytics

Businesses use percentages to analyze data for accurate decision-making. Percentages help in identifying the trends, patterns, and gaps in the market. By using percentages, companies can understand consumer behavior and preferences. They can create informed business strategies targeted at specific demographics.

Importance of Percentages in Sales Projections and Forecasting

Percentages are vital in forecasting sales. Companies use sales forecasts to estimate future sales accurately. Accurate sales projections help a business maintain inventory levels, avoid surplus and waste, and make better financial decisions.

Impact of Percentages on Decision-Making Processes

Percentages have a significant impact on decision-making processes in businesses. Companies make data-driven decisions based on the results obtained from percentages. Percentages can influence decisions regarding product improvements, market expansions, and pricing strategies.

Discussion on Ethical Use of Percentages in Advertising

Percentages are often utilized in advertising to convince potential customers to purchase products or services. However, the ethical use of percentages can be debatable in some instances, such as when it comes to using false or misleading percentages in advertisements. It is crucial to ensure that the percentages used in advertising are accurate and not manipulated to deceive customers.

Conclusion: Harnessing the Power of Percentages

In conclusion, percentages are a fundamental concept in our daily lives, influencing nearly every aspect of our existence. From simple financial transactions to complex scientific discoveries, percentages play an essential role in shaping our understanding of the world around us.

Throughout this article, we have explored the history, calculation, and function of percentages, examining examples of how percentages are applicable in various scenarios. We discussed the advantages and limitations of using percentages in data interpretation, the ethical use of percentages in advertising, as well as innovations in percentage calculation and analysis.

The future of percentages looks promising, with the potential for presenting and communicating percentages through technology. As we continue to advance in our understanding of percentages, we must keep in mind the significance of their impact on society. Proper utilization of percentages can contribute positively to various industries and enable us to make more informed decisions as a society.

1. What is a percentage and why is it important?

A percentage is a way of expressing a fraction where the denominator is 100. It is prevalent in our daily lives, from calculating discounts in shopping to understanding scientific research. Percentages help us understand proportions and changes in values.

2. What is the difference between percentage increase and decrease?

Percentage increase refers to the amount that a value has grown in comparison to its initial value, while percentage decrease refers to the amount that a value has shrunk in comparison to its initial value. They are calculated using the percentage change formula.

3. How are percentages used in data analysis?

Percentages can provide insights into trends and patterns in data analysis. They are used to calculate proportions and rates, and to compare different groups or categories in a dataset. Percentages are often presented in charts, graphs, and tables to make the data more accessible and informative.

4. How are percentages used in marketing and advertising?

Percentages are important in marketing analytics as they help to measure the success of marketing campaigns and predict future sales. They are used to calculate conversion rates, click-through rates, and customer acquisition rates. However, it is important to use percentages ethically and not mislead customers with false advertising claims.

5. What is the future outlook for percentages?

The future of percentages is likely to involve innovation in calculation and analysis methods, as well as the use of technology to improve visualization and communication of percentage data. With the increasing importance of data in decision-making processes, percentages will continue to play a crucial role in understanding and interpreting numerical information.


2.5: Percentage Frequency Distribution



Consider a relative frequency distribution table of hockey players with different heights. This table provides information about the fraction, or proportion, of data values under each class.

If this relative frequency is expressed in terms of percentage, it is called the percentage frequency distribution.

Suppose one is interested in the percentage of players with heights between 152 and 157 centimeters. To find out, multiply the corresponding relative frequency with 100 to get the percentage frequency. This indicates that 5 percent of players fall within the required height range.

Repeat similar calculations for all the other relative frequencies to obtain the percentage frequency of each class. Generally, the sum of all the percentage frequencies is equal to 100.

A percentage frequency distribution, in general, is a display of data that indicates the percentage of observations for each data point or grouping of data points. It is a commonly used method for expressing the relative frequency of survey responses and other data. The percentage frequency distributions are often displayed as bar graphs, pie charts, or tables.

The process of making a percentage frequency distribution involves a few steps: note the total number of observations; count the number of observations within each data point or grouping of data points; and finally, divide the count within each data point or grouping by the total number of observations and multiply by 100. Note that when percentage frequencies are used in a relative frequency distribution, it is sometimes also termed a percentage frequency distribution.
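The steps above can be sketched in Python; the class labels and counts here are illustrative, not taken from the hockey example:

```python
from collections import Counter

def percentage_frequency(observations):
    """Map each class to the percentage of observations it contains."""
    counts = Counter(observations)   # observations per class
    total = len(observations)        # total number of observations
    return {cls: count / total * 100 for cls, count in counts.items()}

# 20 players grouped into height classes (centimetres)
heights = ["152-157"] * 1 + ["157-162"] * 7 + ["162-167"] * 12
dist = percentage_frequency(heights)
print(dist["152-157"])  # 5.0 (1 of 20 players)
```

As the text notes, the resulting percentage frequencies sum to 100.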


Data Analysis in Research: Types & Methods


Content Index

• Why analyze data in research?
• Types of data in research
• Finding patterns in the qualitative data
• Methods used for data analysis in qualitative research
• Preparing data for analysis
• Methods used for data analysis in quantitative research
• Considerations in research data analysis

What is data analysis in research?

Definition of data analysis in research: According to LeCompte and Schensul, research data analysis is a process used by researchers to reduce data to a story and interpret it to derive insights. The data analysis process helps reduce a large chunk of data into smaller fragments that make sense.

Three essential things occur during the data analysis process. The first is data organization. Summarization and categorization together make up the second method, used for data reduction; they help find patterns and themes in the data for easy identification and linking. The third and last is data analysis itself, which researchers do in both top-down and bottom-up fashion.


On the other hand, Marshall and Rossman describe data analysis as a messy, ambiguous, and time-consuming but creative and fascinating process through which a mass of collected data is brought to order, structure and meaning.

We can say that “the data analysis and data interpretation is a process representing the application of deductive and inductive logic to the research and data analysis.”

Researchers rely heavily on data as they have a story to tell or research problems to solve. It starts with a question, and data is nothing but an answer to that question. But, what if there is no question to ask? Well! It is possible to explore data even without a problem – we call it ‘Data Mining’, which often reveals some interesting patterns within the data that are worth exploring.

Irrelevant to the type of data researchers explore, their mission and audiences’ vision guide them to find the patterns to shape the story they want to tell. One of the essential things expected from researchers while analyzing data is to stay open and remain unbiased toward unexpected patterns, expressions, and results. Remember, sometimes, data analysis tells the most unforeseen yet exciting stories that were not expected when initiating data analysis. Therefore, rely on the data you have at hand and enjoy the journey of exploratory research. 


Every kind of data can describe things once a specific value is assigned to it. For analysis, you need to organize these values, process them, and present them in a given context to make them useful. Data can be in different forms; here are the primary data types.

  • Qualitative data: When the data presented has words and descriptions, we call it qualitative data. Although you can observe this data, it is subjective and harder to analyze in research, especially for comparison. Example: anything describing taste, experience, texture, or an opinion is considered qualitative data. This type of data is usually collected through focus groups, personal qualitative interviews, qualitative observation, or open-ended questions in surveys.
  • Quantitative data: Any data expressed in numbers or numerical figures is called quantitative data. This type of data can be distinguished into categories, grouped, measured, calculated, or ranked. Example: responses to questions about age, rank, cost, length, weight, scores, etc. all come under this type of data. You can present such data in graphical formats or charts, or apply statistical analysis methods to it. The OMS (Outcomes Measurement Systems) questionnaires in surveys are a significant source of numeric data.
  • Categorical data: This is data presented in groups. However, an item included in categorical data cannot belong to more than one group. Example: a person responding to a survey by indicating their living style, marital status, smoking habit, or drinking habit provides categorical data. A chi-square test is a standard method used to analyze this data.

Data analysis in qualitative research

Data analysis in qualitative research works a little differently from numerical data because qualitative data is made up of words, descriptions, images, objects, and sometimes symbols. Getting insight from such complex information is a demanding process, which is why qualitative data is typically used for exploratory research and data analysis.

Although there are several ways to find patterns in textual information, a word-based method is the most relied-upon and widely used technique for research and data analysis. Notably, the data analysis process in qualitative research is largely manual: researchers typically read through the available data and identify repetitive or commonly used words.

For example, while studying data collected from African countries to understand the most pressing issues people face, researchers might find  “food”  and  “hunger” are the most commonly used words and will highlight them for further analysis.
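The word-counting step described above can be sketched with Python's standard library; the survey responses below are invented sample data, not from any real study.

```python
# Hypothetical illustration of the word-based technique: tally how often
# key terms appear in open-ended survey responses. The responses are
# invented sample data, not from any real study.
from collections import Counter
import re

responses = [
    "Lack of food is the biggest problem in our village.",
    "Hunger affects the children most during the dry season.",
    "We worry about food prices and hunger every year.",
]

# Tokenize to lowercase words and count them.
words = re.findall(r"[a-z]+", " ".join(responses).lower())
counts = Counter(words)

# Frequently repeated substantive words are flagged for further analysis.
print(counts["food"], counts["hunger"])  # 2 2
```

In a real study the tallies would of course be computed over the full response corpus, and stop words ("the", "is") filtered out before ranking.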

The keyword context is another widely used word-based technique. In this method, the researcher tries to understand the concept by analyzing the context in which the participants use a particular keyword.  

For example , researchers conducting research and data analysis for studying the concept of ‘diabetes’ amongst respondents might analyze the context of when and how the respondent has used or referred to the word ‘diabetes.’

The scrutiny-based technique is also one of the highly recommended text analysis methods used to identify patterns in qualitative data. Compare and contrast is the most widely used method under this technique, used to determine how specific texts are similar to or different from each other.

For example, to study the "importance of a resident doctor in a company," the collected data is divided into people who think it is necessary to hire a resident doctor and those who think it is unnecessary. Compare and contrast is the best method for analyzing polls with single-answer question types.

Metaphors can be used to reduce the data pile and find patterns in it so that it becomes easier to connect data with theory.

Variable Partitioning is another technique used to split variables so that researchers can find more coherent descriptions and explanations from the enormous data.

There are several techniques for analyzing data in qualitative research; here are some commonly used methods:

  • Content Analysis: This is the most widely accepted and frequently employed technique for data analysis in research methodology. It can be used to analyze documented information from text, images, and sometimes physical items. The research questions determine when and where to use this method.
  • Narrative Analysis: This method is used to analyze content gathered from various sources such as personal interviews, field observation, and surveys. Most of the time, the stories or opinions people share are analyzed to find answers to the research questions.
  • Discourse Analysis:  Similar to narrative analysis, discourse analysis is used to analyze the interactions with people. Nevertheless, this particular method considers the social context under which or within which the communication between the researcher and respondent takes place. In addition to that, discourse analysis also focuses on the lifestyle and day-to-day environment while deriving any conclusion.
  • Grounded Theory:  When you want to explain why a particular phenomenon happened, then using grounded theory for analyzing quality data is the best resort. Grounded theory is applied to study data about the host of similar cases occurring in different settings. When researchers are using this method, they might alter explanations or produce new ones until they arrive at some conclusion.

Data analysis in quantitative research

The first stage in quantitative research and data analysis is to prepare the data for analysis so that raw data can be converted into something meaningful. Data preparation consists of the phases below.

Phase I: Data Validation

Data validation is done to check whether the collected data sample meets the pre-set standards or is a biased sample. It is divided into four stages:

  • Fraud: To ensure an actual human being records each response to the survey or the questionnaire
  • Screening: To make sure each participant or respondent is selected or chosen in compliance with the research criteria
  • Procedure: To ensure ethical standards were maintained while collecting the data sample
  • Completeness: To ensure that the respondent answered all the questions in an online survey, or that the interviewer asked all the questions devised in the questionnaire

Phase II: Data Editing

More often than not, an extensive research data sample comes loaded with errors. Respondents sometimes fill in fields incorrectly or skip them accidentally. Data editing is the process wherein researchers confirm that the provided data is free of such errors. They need to conduct the necessary consistency and outlier checks to edit the raw data and make it ready for analysis.

Phase III: Data Coding

Out of all three, this is the most critical phase of data preparation, associated with grouping and assigning values to the survey responses. If a survey is completed with a 1,000-respondent sample, the researcher might create age brackets to distinguish the respondents by age. It then becomes easier to analyze small data buckets rather than deal with the massive data pile.

After the data is prepared for analysis, researchers are free to use different research and data analysis methods to derive meaningful insights. Statistical analysis plans are the most favored approach for analyzing numerical data. In statistical analysis, distinguishing between categorical data and numerical data is essential, as categorical data involves distinct categories or labels, while numerical data consists of measurable quantities. Statistical methods fall into two groups: 'descriptive statistics', used to describe the data, and 'inferential statistics', which help in comparing and generalizing from the data.

Descriptive statistics

This method is used to describe the basic features of the various types of data in research. It presents the data in a meaningful way so that patterns in the data start making sense. However, descriptive analysis does not go beyond summarizing the data; any conclusions still rest on the hypotheses researchers have formulated. Here are a few major types of descriptive analysis methods.

Measures of Frequency

  • Count, Percent, Frequency
  • It is used to denote how often a particular event occurs.
  • Researchers use it when they want to showcase how often a response is given.

Measures of Central Tendency

  • Mean, Median, Mode
  • The method is widely used to summarize a distribution by its central point.
  • Researchers use this method when they want to showcase the most common or average response.
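As a quick illustration, Python's standard `statistics` module computes all three measures; the 1-5 survey ratings below are invented sample data.

```python
# Mean, median and mode with Python's standard library; the 1-5 survey
# ratings are invented sample data.
import statistics

ratings = [4, 5, 3, 4, 4, 2, 5, 4]

print(statistics.mean(ratings))    # 3.875
print(statistics.median(ratings))  # 4.0
print(statistics.mode(ratings))    # 4
```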

Measures of Dispersion or Variation

  • Range, Variance, Standard deviation
  • The range is the difference between the highest and lowest scores.
  • The variance and standard deviation summarize how far the observed scores fall from the mean.
  • It is used to identify the spread of scores by stating intervals.
  • Researchers use this method to show how spread out the data is and how strongly individual scores deviate from the mean.
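A minimal sketch of the three dispersion measures, using invented scores:

```python
# Range, variance and standard deviation for a small invented score set.
import statistics

scores = [70, 75, 80, 85, 90]

rng = max(scores) - min(scores)     # range: 20
var = statistics.pvariance(scores)  # population variance: 50.0
sd = statistics.pstdev(scores)      # population standard deviation: ~7.07

print(rng, var, round(sd, 2))  # 20 50.0 7.07
```

Note that `pvariance`/`pstdev` treat the list as the whole population; `variance`/`stdev` would apply the sample (n−1) correction instead.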

Measures of Position

  • Percentile ranks, Quartile ranks
  • It relies on standardized scores helping researchers to identify the relationship between different scores.
  • It is often used when researchers want to compare individual scores against the overall distribution.
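One common definition of percentile rank is the percentage of scores at or below a given value; a minimal sketch with invented data:

```python
# Percentile rank as the share of scores at or below a given value.
# The score list is invented for illustration.
def percentile_rank(scores, value):
    at_or_below = sum(1 for s in scores if s <= value)
    return 100 * at_or_below / len(scores)

scores = [55, 60, 65, 70, 75, 80, 85, 90, 95, 100]
print(percentile_rank(scores, 80))  # 60.0
```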

For quantitative research, descriptive analysis often gives absolute numbers, but those numbers alone are rarely sufficient to demonstrate the rationale behind them. Nevertheless, it is necessary to think about which method of research and data analysis suits your survey questionnaire and the story you want to tell. For example, the mean is the best way to demonstrate students' average scores in a school. It is better to rely on descriptive statistics when researchers intend to keep the findings limited to the provided sample without generalizing them: for example, when you want to compare the average turnout in two different cities, descriptive statistics are enough.

Descriptive analysis is also called a ‘univariate analysis’ since it is commonly used to analyze a single variable.

Inferential statistics

Inferential statistics are used to make predictions about a larger population after research and data analysis of a sample representing that population. For example, you can ask a hundred or so audience members at a movie theater whether they like the movie they are watching. Researchers then use inferential statistics on the collected sample to infer that about 80-90% of people like the movie.
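The movie-theater example can be sketched as a proportion estimate with a normal-approximation 95% confidence interval; the sample figures are invented.

```python
# Estimate the share of the whole audience that likes the movie from a
# sample, with a 95% confidence interval via the normal approximation.
# Sample figures are invented.
import math

n = 100        # audience members sampled
liked = 85     # said they liked the movie

p_hat = liked / n                        # point estimate: 0.85
se = math.sqrt(p_hat * (1 - p_hat) / n)  # standard error of a proportion
margin = 1.96 * se                       # z-multiplier for a 95% interval

print(round(p_hat, 2), round(p_hat - margin, 2), round(p_hat + margin, 2))
# 0.85 0.78 0.92
```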

Here are two significant areas of inferential statistics.

  • Estimating parameters: It takes statistics from the sample research data and demonstrates something about the population parameter.
  • Hypothesis tests: It's about using sample research data to answer survey research questions. For example, researchers might want to know whether a newly launched shade of lipstick is good or not, or whether multivitamin capsules help children perform better at games.

These are sophisticated analysis methods used to showcase the relationship between different variables instead of describing a single variable. It is often used when researchers want something beyond absolute numbers to understand the relationship between variables.

Here are some of the commonly used methods for data analysis in research.

  • Correlation: When researchers are not conducting experimental or quasi-experimental research but are interested in understanding the relationship between two or more variables, they opt for correlational research methods.
  • Cross-tabulation: Also called contingency tables,  cross-tabulation  is used to analyze the relationship between multiple variables.  Suppose provided data has age and gender categories presented in rows and columns. A two-dimensional cross-tabulation helps for seamless data analysis and research by showing the number of males and females in each age category.
  • Regression analysis: For understanding the strong relationship between two variables, researchers do not look beyond the primary and commonly used regression analysis method, which is also a type of predictive analysis used. In this method, you have an essential factor called the dependent variable. You also have multiple independent variables in regression analysis. You undertake efforts to find out the impact of independent variables on the dependent variable. The values of both independent and dependent variables are assumed as being ascertained in an error-free random manner.
  • Frequency tables: This procedure summarizes how often each value or category occurs in the data; the resulting counts and the percentages derived from them give a quick overview of how responses are distributed.
  • Analysis of variance: The statistical procedure is used for testing the degree to which two or more vary or differ in an experiment. A considerable degree of variation means research findings were significant. In many contexts, ANOVA testing and variance analysis are similar.
  • Researchers must have the necessary research skills to analyze and manipulate the data, and should be trained to demonstrate a high standard of research practice. Ideally, researchers should possess more than a basic understanding of the rationale for selecting one statistical method over another to obtain better data insights.
  • Research and data analytics projects usually differ by scientific discipline; therefore, getting statistical advice at the beginning of a project helps in designing the survey questionnaire, selecting data collection methods, and choosing samples.
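The regression analysis described above can be illustrated with an ordinary least-squares fit of a single predictor; the study-hours data below is invented.

```python
# A minimal ordinary least-squares sketch of simple linear regression,
# fitting y = a + b*x to show how the impact of one independent variable
# on a dependent variable is estimated. The data points are invented.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope: covariance of x and y divided by the variance of x.
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx  # the fitted line passes through the means
    return a, b

xs = [1, 2, 3, 4, 5]        # e.g. hours studied
ys = [52, 54, 56, 58, 60]   # e.g. exam score
a, b = fit_line(xs, ys)
print(a, b)  # 50.0 2.0
```

Here the fitted slope of 2.0 says each extra unit of the independent variable is associated with two more units of the dependent variable, under the usual error-free-measurement assumption noted above.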

  • The primary aim of research and data analysis is to derive insights that are unbiased. Any mistake, or any bias in collecting the data, selecting an analysis method, or choosing the audience sample, is likely to lead to a biased inference.
  • No amount of sophistication in research data analysis can rectify poorly defined objectives or outcome measurements. Whether the design is at fault or the intentions are unclear, a lack of clarity can mislead readers, so avoid the practice.
  • The motive behind data analysis in research is to present accurate and reliable data. As far as possible, avoid statistical errors, and find ways to deal with everyday challenges like outliers, missing data, data alteration, data mining, and developing graphical representations.

The sheer amount of data generated daily is staggering, especially now that data analysis has taken center stage: in 2018 alone, the total data supply amounted to 2.8 trillion gigabytes. Hence, it is clear that enterprises willing to survive in a hypercompetitive world must possess an excellent capability to analyze complex research data, derive actionable insights, and adapt to new market needs.

QuestionPro is an online survey platform that empowers organizations in data analysis and research and provides them a medium to collect data by creating appealing surveys.

What Is Statistical Analysis?

Statistical analysis is a technique we use to find patterns in data and make inferences about those patterns to describe variability in the results of a data set or an experiment. 

In its simplest form, statistical analysis answers questions about:

  • Quantification — how big/small/tall/wide is it?
  • Variability — growth, increase, decline
  • The confidence level of these variabilities

What Are the 2 Types of Statistical Analysis?

  • Descriptive Statistics:  Descriptive statistical analysis describes the quality of the data by summarizing large data sets into single measures. 
  • Inferential Statistics:  Inferential statistical analysis allows you to draw conclusions from your sample data set and make predictions about a population using statistical tests.

What’s the Purpose of Statistical Analysis?

Using statistical analysis, you can determine trends in the data by calculating your data set’s mean or median. You can also analyze the variation between different data points from the mean to get the standard deviation . Furthermore, to test the validity of your statistical analysis conclusions, you can use hypothesis testing techniques, like P-value, to determine the likelihood that the observed variability could have occurred by chance.

Statistical Analysis Methods

There are two major types of statistical data analysis: descriptive and inferential. 

Descriptive Statistical Analysis

Descriptive statistical analysis describes the quality of the data by summarizing large data sets into single measures. 

Within the descriptive analysis branch, there are two main types: measures of central tendency (i.e. mean, median and mode) and measures of dispersion or variation (i.e. variance , standard deviation and range). 

For example, you can calculate the average exam results in a class using central tendency or, in particular, the mean. In that case, you’d sum all student results and divide by the number of tests. You can also calculate the data set’s spread by calculating the variance. To calculate the variance, subtract each exam result in the data set from the mean, square the answer, add everything together and divide by the number of tests.
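The class-average calculation above, computed step by step with invented exam scores:

```python
# Mean and (population) variance computed exactly as described above;
# the exam scores are invented.
scores = [60, 70, 80, 90, 100]

# Sum all results and divide by the number of tests.
mean = sum(scores) / len(scores)  # 80.0

# Subtract the mean from each result, square, sum, divide by the count.
variance = sum((s - mean) ** 2 for s in scores) / len(scores)  # 200.0

print(mean, variance)  # 80.0 200.0
```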

Inferential Statistics

On the other hand, inferential statistical analysis allows you to draw conclusions from your sample data set and make predictions about a population using statistical tests. 

There are two main types of inferential statistical analysis: hypothesis testing and regression analysis. We use hypothesis testing to test and validate assumptions in order to draw conclusions about a population from the sample data. Popular tests include Z-test, F-Test, ANOVA test and confidence intervals . On the other hand, regression analysis primarily estimates the relationship between a dependent variable and one or more independent variables. There are numerous types of regression analysis but the most popular ones include linear and logistic regression .  

Statistical Analysis Steps  

In the era of big data and data science, there is a rising demand for a more problem-driven approach. As a result, we must approach statistical analysis holistically. We may divide the entire process into five different and significant stages by using the well-known PPDAC model of statistics: Problem, Plan, Data, Analysis and Conclusion.

[Figure: the statistical cycle — the five PPDAC stages (Problem, Plan, Data, Analysis, Conclusion) arranged clockwise in a circle.]

1. Problem

In the first stage, you define the problem you want to tackle and explore questions about it.

2. Plan

Next is the planning phase. You check whether data is already available or whether you need to collect it, and you determine what to measure and how to measure it.

3. Data

The third stage involves collecting the data, understanding it and checking its quality.

4. Analysis

Statistical data analysis is the fourth stage. Here you process and explore the data with the help of tables, graphs and other data visualizations.  You also develop and scrutinize your hypothesis in this stage of analysis. 

5. Conclusion

The final step involves interpretations and conclusions from your analysis. It also covers generating new ideas for the next iteration. Thus, statistical analysis is not a one-time event but an iterative process.

Statistical Analysis Uses

Statistical analysis is useful for research and decision making because it allows us to understand the world around us and draw conclusions by testing our assumptions. Statistical analysis is important for various applications, including:

  • Statistical quality control and analysis in product development 
  • Clinical trials
  • Customer satisfaction surveys and customer experience research 
  • Marketing operations management
  • Process improvement and optimization
  • Training needs 

Benefits of Statistical Analysis

Here are some of the reasons why statistical analysis is widespread in many applications and why it’s necessary:

Understand Data

Statistical analysis gives you a better understanding of the data and what they mean. These types of analyses provide information that would otherwise be difficult to obtain by merely looking at the numbers without considering their relationship.

Find Causal Relationships

Statistical analysis can help you investigate causation or establish the precise meaning of an experiment, like when you’re looking for a relationship between two variables.

Make Data-Informed Decisions

Businesses are constantly looking to find ways to improve their services and products . Statistical analysis allows you to make data-informed decisions about your business or future actions by helping you identify trends in your data, whether positive or negative. 

Determine Probability

Statistical analysis is an approach to understanding how the probability of certain events affects the outcome of an experiment. It helps scientists and engineers decide how much confidence they can have in the results of their research, how to interpret their data and what questions they can feasibly answer.

What Are the Risks of Statistical Analysis?

Statistical analysis can be valuable and effective, but it’s an imperfect approach. Even if the analyst or researcher performs a thorough statistical analysis, there may still be known or unknown problems that can affect the results. Therefore, statistical analysis is not a one-size-fits-all process. If you want to get good results, you need to know what you’re doing. It can take a lot of time to figure out which type of statistical analysis will work best for your situation .

Thus, you should remember that our conclusions drawn from statistical analysis don’t always guarantee correct results. This can be dangerous when making business decisions. In marketing , for example, we may come to the wrong conclusion about a product . Therefore, the conclusions we draw from statistical data analysis are often approximated; testing for all factors affecting an observation is impossible.

Statistics Notes: What is a percentage difference?

Tim J Cole, professor, Population, Policy and Practice Programme, Great Ormond Street Institute of Child Health, University College London, London WC1N 1EH, UK

Douglas G Altman, professor, Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford OX3 7LD, UK

Correspondence to: T J Cole tim.cole{at}ucl.ac.uk

We use percentages to express differences as a fraction of the whole. Suppose we want to know the percentage difference in mean height of British adults aged 20 years, 177.3 cm for men and 163.6 cm for women, a difference of 13.7 cm. 1 Women are 100×(13.7/177.3)=7.7% shorter than men, whereas men are 100×(13.7/163.6)=8.4% taller than women. There are two percentage differences, depending on which sex is used as the divisor—this can be confusing. The same problem does not arise with the absolute difference—women are 13.7 cm shorter than men, and men are 13.7 cm taller than women.

Often one of the two numbers to be compared is obviously appropriate as the divisor, such as the first of two measurements taken some time apart; the percentage difference is then the percentage change over time. But often neither measurement is an obvious baseline, and neither of the two percentages is satisfactory. What is the percentage difference in height between the two sexes: is it 7.7% or 8.4%? The answer could be neither, either, and both.

When the difference is small the two percentages are very similar. At 11 years of age, girls are on average 0.523% taller than boys, and boys are 0.520% shorter than girls. The two percentage differences diverge as the two numbers become more different. The married couple recorded as being most dissimilar in height 2 were 188 cm and 94 cm tall, a difference of 94 cm. He was 100% taller than her, while she was 50% shorter than him.

This is clearly an extreme example, but it highlights the potential for confusion. A way forward is to define an alternative form of percentage difference with the mean of the two numbers as divisor:

Percentage difference=100×(difference/mean).

So, for the vertically challenged couple, whose mean is (188+94)/2=141 cm, the percentage difference would be 100×(94/141)=66.7%. This lies between the two conventional percentage differences and is unchanged if the two heights are swapped, +66.7% or −66.7%. It is a symmetric percentage difference, which matches the symmetry of the absolute difference in height, +94 cm or −94 cm. Note, though, that its value depends on which form of mean is used—using the geometric or harmonic mean instead of the arithmetic mean would give a different answer.
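The couple's three percentage differences can be verified directly:

```python
# The conventional and symmetric percentage differences for the
# 188 cm and 94 cm couple from the example above.
taller, shorter = 188, 94
diff = taller - shorter        # 94 cm
mean = (taller + shorter) / 2  # 141.0 cm

print(100 * diff / shorter)         # 100.0 -> he is 100% taller
print(100 * diff / taller)          # 50.0  -> she is 50% shorter
print(round(100 * diff / mean, 1))  # 66.7  -> symmetric form
```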

However, there is a second problem with the conventional percentage difference—it does not add up. Take a preterm infant whose weight increases by 10% each week for two weeks. You might expect her weight to have increased overall by 2×10%=20%, but you would be wrong: the true figure is 21%. And similarly, if her weight were to rise by 10% one week and fall by 10% the next, the two do not cancel out: her weight at the end would be only 99% of her starting weight.
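A quick check of the preterm-infant example, showing that percentage changes multiply rather than add:

```python
# Percentage changes compound multiplicatively, so they do not simply
# add up or cancel out.
start = 1.0                         # starting weight (normalized)
two_gains = start * 1.10 * 1.10     # +10% twice -> 1.21, i.e. +21%
up_then_down = start * 1.10 * 0.90  # +10% then -10% -> 0.99, i.e. 99%

print(round(two_gains, 2), round(up_then_down, 2))  # 1.21 0.99
```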

These examples show that, as well as not being symmetric, the percentage difference is not additive . And unlike symmetry, additivity is not always achieved by using the mean as divisor.

Percentage differences arise in other contexts, such as fractional standard deviations and fractional regression coefficients. A separate Statistics Note 3 shows how the concepts are linked and how to calculate a percentage difference that is both symmetric and additive.

1. Freeman JV, Jones PRM,
2. The Guinness Book of Records: 2017. Guinness World Records, 2016.

The Method of Data Analysis section outlines exactly which statistic will be used to answer each Research Question and/or Research Hypothesis. To complete this section, refer to the Research Questions and Research Hypotheses. For every research question, describe the descriptive statistic that is appropriate for answering the question. For every research hypothesis, describe the inferential statistic that is appropriate for analyzing the hypothesis. For simple statistics (e.g., percentage, mean, t-test), it is possible to also give the formula for the statistic. However, for more advanced statistics such as ANCOVA, the statistic is much too complex to describe the formula. The following general guidelines should help you determine which statistic is appropriate for each research question and hypothesis.

Note that in many research studies, a range of different statistics will be necessary. This means that researchers should examine each research question and hypothesis separately to consider which statistic is appropriate.

Research questions are always answered with a descriptive statistic: generally either percentage or mean. Percentage is appropriate when it is important to know how many of the participants gave a particular answer. Generally, percentage is reported when the responses have discrete categories. This means that the responses fall in different categories, such as female or male, Christian or Muslim, and smoker or non-smoker. Sometimes frequencies are also reported when the data has discrete categories. However, percentages are easier to understand than frequencies because the percentage can be interpreted as follows. Imagine there were exactly 100 cases in the sample. How many cases out of those 100 would fall in that category?

The mean is reported when it is important to understand the typical response of all the participants. Generally, mean is reported when the responses are continuous. This means that the data has numbers that continue from one point to the last point. For example, age is continuous because it can range from 0 to 100 or so. Scores on an exam are also continuous. In these cases, the mean describes the typical score across all participants.

Whenever a research hypothesis uses the word "relationship," it generally means that a correlation will be calculated. The correlation statistic examines the relationship between two continuous variables within the same group of participants. For example, the correlation would quantify the relationship between academic achievement and achievement motivation. The null hypothesis of a correlation is stated as "there is no significant relationship between academic achievement and achievement motivation."

When calculating the correlation, it is important to compute not just the correlation itself but also its significance. The p-value determines whether the relationship is significant. If the p-value is greater than 0.05, then the null hypothesis is retained: no significant relationship was found between the two variables, and no further interpretation is necessary. If the p-value is less than 0.05, then the null hypothesis is rejected, meaning that there is a significant relationship between the two variables. (Read below for more information about interpreting the significance of a p-value.) The correlation (symbolized as r) can then be interpreted.

The correlation has two dimensions. The direction of the correlation is indicated by its sign. If the correlation is positive, then as one variable increases, the other variable also increases: the greater the achievement motivation, the greater the academic achievement. A negative correlation means that as one variable increases, the other variable decreases: the more time a person spends watching television, the lower their academic achievement.

The second dimension of a correlation is its strength, indicated by the absolute value of the number (i.e., the value of the number itself without the positive or negative sign). The closer the absolute value is to 1, the stronger the relationship; the closer it is to 0, the weaker the relationship. For example, correlations of -0.71 and 0.87 are both strong, while correlations of -0.18 and 0.09 are both weak.
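As a sketch of how direction and strength are read off, the Pearson correlation can be computed directly from its formula; the motivation and achievement values below are hypothetical:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two continuous variables."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical scores: achievement motivation and academic achievement
motivation = [2, 4, 5, 7, 8]
achievement = [50, 55, 60, 70, 78]

r = pearson_r(motivation, achievement)
# Positive sign -> both variables increase together (direction);
# absolute value near 1 -> strong relationship (strength)
print(round(r, 2))
```

Here r is positive and close to 1, so these made-up data show a strong positive relationship.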

When the term "relationship" is used in a research hypothesis, sometimes a chi-square statistic may be calculated instead. Chi-square should be used when both variables are discrete, meaning that both are represented by categories, not numbers. For example, a chi-square would be used to determine if there is a relationship between gender and smoking status: gender can only be represented as categories (male and female), as can smoking status (smoker and non-smoker). However, chi-square is often misused. Some researchers group participants into categories based on numerical data, such as taking academic achievement and dividing students into "high achievement" and "low achievement" categories based on their numerical examination scores. This is not correct. It is much better to keep the original exam scores and calculate a correlation, because that keeps the data in its original form. Researchers are more likely to get a significant result when the original data is used rather than grouping participants into artificial categories.
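To make this concrete, a chi-square statistic for two discrete variables can be computed from a contingency table of observed counts; the gender-by-smoking counts below are hypothetical:

```python
# 2x2 contingency table of observed counts (hypothetical data):
# rows = gender (male, female), columns = smoking status (smoker, non-smoker)
observed = [
    [20, 30],
    [10, 40],
]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Chi-square sums (observed - expected)^2 / expected over every cell,
# where the expected count assumes no relationship between the variables
chi_square = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_square += (obs - expected) ** 2 / expected

print(round(chi_square, 2))
```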

When a research hypothesis looks at the "effect of a treatment" or "difference between groups," then there are three possible statistics that can be used. The specific statistic depends on the research design. First, consider whether the study will administer the instrument once or twice (e.g., pre-post test experimental or quasi-experimental design). If the study will use a pre-post test design, then an Analysis of Covariance (ANCOVA) should be used. If the instrument will only be administered once, then consider how many groups will be used in the study (either treatment/control group or various groups for the causal-comparative design). If there will be only two groups, then a t-test should be used to compare the two groups. If there will be three or more groups, then the Analysis of Variance (ANOVA) should be used. More details for each of the statistics are given below. Also read more about the theory behind p-values to help you understand what this statistic means.

t-test: When comparing two groups on one dependent variable, a t-test should be used. For example, use a t-test to compare a treatment group to a control group or to compare males and females.
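A minimal sketch of the independent-samples t statistic (pooled variance), using invented treatment and control scores:

```python
import math
import statistics

def t_statistic(group1, group2):
    """Independent-samples t statistic with pooled variance."""
    n1, n2 = len(group1), len(group2)
    mean1, mean2 = statistics.mean(group1), statistics.mean(group2)
    var1, var2 = statistics.variance(group1), statistics.variance(group2)
    pooled_var = ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var * (1 / n1 + 1 / n2))

# Hypothetical achievement scores for a treatment and a control group
treatment = [78, 85, 90, 82, 88]
control = [70, 75, 72, 68, 74]

t = t_statistic(treatment, control)
df = len(treatment) + len(control) - 2  # degrees of freedom to report with t
print(round(t, 2), df)
```

The t value and degrees of freedom are then looked up (or passed to software) to obtain the p-value.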

  • One-way ANOVA: A one-way ANOVA compares multiple groups on the same variable. For example, a one-way ANOVA would be used to compare the achievement motivation of students in JS1, JS2, and JS3.
  • Factorial ANOVA: The factorial ANOVA compares the effect of multiple independent variables on one dependent variable. For example, a 2x3 factorial ANOVA could compare the effects of gender and grade level on achievement motivation. The first independent variable, gender, has two levels (male and female) and the second independent variable, grade level, has three levels (JS1, JS2, and JS3). This makes it a 2x3 factorial ANOVA. Another study might have three treatment groups and three grade levels; because each independent variable has three levels, it would be a 3x3 ANOVA.
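The one-way ANOVA F statistic can be sketched directly from its definition (between-groups variance over within-groups variance); the JS1-JS3 motivation scores below are invented:

```python
import statistics

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across several groups."""
    all_scores = [score for group in groups for score in group]
    grand_mean = statistics.mean(all_scores)
    k, n = len(groups), len(all_scores)
    # Between-groups sum of squares: how far each group mean sits from the grand mean
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: spread of scores around their own group mean
    ss_within = sum((score - statistics.mean(g)) ** 2 for g in groups for score in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical achievement motivation scores by grade level
js1, js2, js3 = [60, 65, 70], [55, 58, 61], [72, 75, 78]
f = one_way_anova_f(js1, js2, js3)
print(round(f, 2))
```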

ANCOVA: When using a pre-post test research design, the Analysis of Covariance allows a comparison of post-test scores with pre-test scores factored out. For example, if comparing a treatment and control group on achievement motivation with a pre-post test design, the ANCOVA will compare the treatment and control groups' post-test scores by statistically setting the pre-test scores as being equal.

Any of the statistics used to answer research hypotheses are called inferential statistics (correlation, chi-square, t-test, ANOVA, and ANCOVA). Educational researchers can never sample the entire population. Instead, a sample is chosen to represent the population. However, the researcher still wants to draw conclusions about the entire population even though only a sample actually participated in the study. In other words, the researcher wants to make inferences about the population based on the results from the sample. The purpose of inferential statistics is to determine whether the findings from the sample can generalize to the entire population, or whether the findings were simply the result of chance.

Imagine a room full of socks - socks from the floor to the ceiling, from the back of the room clear to the front door. You want to determine whether there are more white socks than green socks in the room. However, there are too many socks to count, so you decide to take a sample of socks. You count the number of white and green socks in the sample. Then, you would like to draw a conclusion about whether there are more white socks in the entire room based on your sample. The purpose of inferential statistics is to determine whether the colors chosen in the sample likely reflect the entire room or if your results from the sample of socks were due to chance.

What factors will determine whether the sample of socks adequately represents the entire room? First, the size of the sample. If only two socks were picked, they would very likely not represent the entire room. The larger the sample, the more representative it will be of the entire room and the more accurate the conclusions will be. This is why, when conducting experiments, a larger sample is generally better (although not always). With large samples, the results will more likely reflect the entire population.

The second factor that determines whether the sample of socks adequately represents the entire room is the actual size of the difference between white and green socks in the entire room. If there are only two more white socks than green socks in the entire room, then it will be very difficult to find a significant difference between white and green socks in the sample. In other words, because there is only a very small difference between green and white socks in reality, it will be practically impossible to find a significant difference in the sample. On the other hand, if there are thousands more white socks than green socks in the entire room, it should be relatively easy to find a significant difference in the sample. This means that when you are conducting a research study, try to ensure that there really might be a large difference between groups in reality. Otherwise, you will not find significant results. If conducting an experimental design, plan very well to make the treatment very effective. Very effective treatments result in large changes in the dependent variable and increase the chance of finding a significant difference in the study. This is also why large sample sizes are not always best: if the sample size is too large, the treatment might not be delivered as effectively, which will decrease the chance of getting a significant result.

Another way of thinking about significance testing is this: imagine you wanted to determine if there was a difference between males and females in science achievement. To do this, you administer a science achievement test to 50 males and 50 females. Then you calculate the mean (average) science achievement score for the males and for the females. It is practically impossible for the mean scores to be exactly identical; in other words, there will always be at least some small difference between the groups. However, this difference may be very small: perhaps the mean score for the males is 50.21 (out of 100) while the mean score for the females is 50.25. Yes, there is a difference between males and females, but is it large enough to be a significant, meaningful difference? The inferential statistic determines whether the difference is large enough to conclude that yes, there is a meaningful difference between males and females in science achievement.

For the t-test, ANOVA, and ANCOVA, four statistics are important to report. First, the p-value determines whether the differences between the groups are significant. If the p-value is less than 0.05, then the differences are significant and the null hypothesis can be rejected. For example, if the null hypothesis was that there is no significant difference between males and females on achievement motivation and the p-value is 0.02, then we reject the null hypothesis and say there is a significant difference between males and females in achievement motivation. However, if the p-value is greater than 0.05, then the statistic is not significant. This means the null hypothesis is retained: indeed, there is no difference between males and females in achievement motivation.

When reporting the p-value, the value of t (for the t-test) or F (for ANOVA and ANCOVA) and the number of degrees of freedom must also be included. The mean and standard deviation for each group on the dependent variable must also be reported, which helps the reader interpret which group has the highest average on the dependent variable.

Any of the previously mentioned statistics can be calculated using the VassarStats website for free.

Copyright 2013, Katrina A. Korb, All Rights Reserved


NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2024 Jan-.


Exploratory data analysis: frequencies, descriptive statistics, histograms, and boxplots.

Jacob Shreffler ; Martin R. Huecker .

Last Update: November 3, 2023.

  • Definition/Introduction

Researchers must utilize exploratory data techniques to present findings to a target audience and create appropriate graphs and figures. By understanding the data, researchers can determine whether outliers exist, whether data are missing, and whether statistical assumptions will be upheld. Additionally, it is essential to comprehend these data when describing them in the conclusions of a paper, in a meeting with colleagues invested in the findings, or while reading others' work.

  • Issues of Concern

This comprehension begins with exploring these data through the outputs discussed in this article. Individuals who do not conduct research must still comprehend new studies, and knowledge of fundamentals in analyzing data and interpretation of histograms and boxplots facilitates the ability to appraise recent publications accurately. Without this familiarity, decisions could be implemented based on inaccurate delivery or interpretation of medical studies.

Frequencies and Descriptive Statistics

Effective presentation of study results, in presentation or manuscript form, typically starts with frequencies and descriptive statistics (ie, means, medians, standard deviations). One can get a better sense of the variables by examining these data to determine whether a balanced and sufficient research design exists. Frequencies also inform on missing data and give a sense of outliers (discussed below).

Luckily, software programs are available to conduct exploratory data analysis. For this chapter, we will be examining the following research question.

RQ: Are there differences in drug life (length of effect) for Drug 23 based on the administration site?

A more precise hypothesis could be: Is Drug 23 longer-lasting when administered via site A compared to site B?

To address this research question, exploratory data analysis is conducted. First, it is essential to start with the frequencies of the variables. To keep things simple, only the variables of minutes (drug life effect) and administration site (A vs B) are included. See Figure 1 for the frequency outputs.

Figure 1 shows that the administration site appears to be a balanced design with 50 individuals in each group. The excerpt for minutes frequencies is the bottom portion of Figure 1 and shows how many cases fell into each time frame, with the cumulative percent on the right-hand side. In examining Figure 1, one suspiciously low measurement (135) was observed for the time variable. If a data point seems inaccurate, a researcher should find this case and confirm whether it was an entry error. For the sake of this review, the authors state that this was an entry error and should have been entered as 535, not 135. Had the analysis proceeded without checking this, the data analysis, results, and conclusions would have been invalid. After finding any entry errors and determining whether groups are balanced, potential missing data should be explored. If not responsibly evaluated, missing values can nullify results.

After replacing the incorrect 135 with 535, descriptive statistics, including the mean, median, mode, minimum/maximum scores, and standard deviation, were examined. Output for the research example for the variable of minutes can be seen in Figure 2. Observe each variable to ensure that the mean seems reasonable and that the minimum and maximum are within an appropriate range based on medical competence or an available codebook. One assumption common in statistical analyses is a normal distribution. Figure 2 shows that the mode differs from the mean and the median. Visualization tools such as histograms can examine these scores for normality and outliers before making decisions.
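The descriptive statistics named above can be reproduced with Python's standard library; the minutes values below are illustrative stand-ins, not the study's actual data:

```python
import statistics

# Illustrative drug-life measurements in minutes (not the study's data)
minutes = [535, 540, 525, 530, 535, 545, 520, 535]

print("mean:", statistics.mean(minutes))
print("median:", statistics.median(minutes))
print("mode:", statistics.mode(minutes))
print("min/max:", min(minutes), max(minutes))
print("stdev:", round(statistics.stdev(minutes), 2))
```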

Histograms are useful in assessing normality, as many statistical tests (eg, ANOVA and regression) assume the data have a normal distribution. When data deviate from a normal distribution, the deviation is quantified using skewness and kurtosis. [1]  Skewness occurs when one tail of the curve is longer. If the tail is lengthier on the left side of the curve (more cases on the higher values), the distribution is negatively skewed, whereas if the tail is longer on the right side, it is positively skewed. Kurtosis is another facet of normality: positive kurtosis occurs when many values cluster tightly in the center with heavy tails, whereas negative kurtosis occurs when the distribution is flatter with light tails. [2]

Additionally, histograms reveal outliers: data points either entered incorrectly or truly very different from the rest of the sample. When there are outliers, one must determine whether they reflect random chance or an error in the experiment and provide strong justification if the decision is to exclude them. [3]  Outliers require attention to ensure the data analysis accurately reflects the majority of the data and is not influenced by extreme values; cleaning these outliers can result in better quality decision-making in clinical practice. [4]  A common approach to determining if a variable is approximately normally distributed is converting values to z scores and determining if any scores are less than -3 or greater than 3. For a normal distribution, about 99.7% of scores lie within three standard deviations of the mean. [5]  Importantly, one should not automatically throw out any values outside of this range but consider them alongside the other factors mentioned above. Outliers are relatively common, so when they are prevalent, one must assess the risks and benefits of exclusion. [6]
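The z-score screen described above can be sketched as follows; the minutes values are invented, with one deliberately suspect entry:

```python
import statistics

def z_score_outliers(values, threshold=3.0):
    """Return values whose z score exceeds the threshold in absolute value."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [v for v in values if abs((v - mean) / sd) > threshold]

# Invented drug-life measurements with one suspect entry (135)
minutes = [520, 525, 528, 530, 531, 532, 533, 535, 536, 538, 540, 135]
print(z_score_outliers(minutes))
```

Flagged values are candidates for inspection, not automatic deletion, in line with the caution above.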

Figure 3 provides examples of histograms. In Figure 3A, 2 possible outliers causing kurtosis are observed. If only values within 3 standard deviations are used, the result in Figure 3B is observed. This histogram appears much closer to an approximately normal distribution once the kurtosis is treated. Remember, all evidence should be considered before eliminating outliers. When reporting outliers in scientific paper outputs, account for the number of outliers excluded and justify why they were excluded.

Boxplots can be used to examine outliers, assess the range of data, and show differences among groups. Boxplots provide a visual representation of ranges and medians, illustrating differences amongst groups, and are useful in various outlets, including evidence-based medicine. [7]  Boxplots provide a picture of data distribution when there are numerous values and all values cannot be displayed (ie, in a scatterplot). [8]  Figure 4 illustrates the differences between drug site administration and the length of drug life from the above example.
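The numbers a boxplot draws (median and quartiles) can be computed directly; the per-site minutes below are hypothetical stand-ins for the example data:

```python
import statistics

# Hypothetical drug-life minutes for two administration sites
site_a = [548, 552, 555, 558, 560, 563, 566, 570]
site_b = [505, 510, 515, 520, 522, 526, 530, 535]

for name, data in [("A", site_a), ("B", site_b)]:
    q1, median, q3 = statistics.quantiles(data, n=4)
    print(f"site {name}: Q1={q1}, median={median}, Q3={q3}")
```

The box spans Q1 to Q3 (the middle 50% of scores), and the line inside the box marks the median.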

Figure 4 shows differences with potential clinical impact. Had any outliers existed (the data from the histogram were cleaned), they would appear beyond the line endpoints. The red boxes represent the middle 50% of scores: the top and bottom edges of each box mark the 75th and 25th percentiles, and the line within each box marks the median number of minutes for that administration site. The whiskers extending from each box show the range of the remaining data. In examining the boxplots, an overlap in minutes between the 2 administration sites was observed: the approximate top 25 percent from site B covered the same times as the bottom 25 percent at site A. Site B had a median under 525 minutes, whereas site A had a median greater than 550. If there were no differences in adverse reactions at site A, this figure provides evidence that healthcare providers should administer the drug via site A. Researchers could follow by testing a third administration site, site C. Figure 5 shows what would happen if site C led to a longer drug life compared to site A.

Figure 5 displays the same site A data as Figure 4, but something looks different. The large variance at site C makes site A's variance appear smaller. In other words, patients who were administered the drug via site C had a larger range of scores. Thus, some patients experience a longer drug life when the drug is administered via site C than the median of site A; however, the broad range (lack of precision) and lower median should be the focus. The minutes are much more tightly clustered at site A, whose median is higher and whose range is more precise. One may conclude that this makes site A the more desirable site.

  • Clinical Significance

Ultimately, by understanding basic exploratory data methods, medical researchers and consumers of research can make quality and data-informed decisions. These data-informed decisions will result in the ability to appraise the clinical significance of research outputs. By overlooking these fundamentals in statistics, critical errors in judgment can occur.

  • Nursing, Allied Health, and Interprofessional Team Interventions

All interprofessional healthcare team members need to be at least familiar with, if not well-versed in, these statistical analyses so they can read and interpret study data and apply the data implications in their everyday practice. This approach allows all practitioners to remain abreast of the latest developments and provides valuable data for evidence-based medicine, ultimately leading to improved patient outcomes.


Exploratory Data Analysis Figures 1-5 contributed by Martin Huecker, MD and Jacob Shreffler, PhD

Disclosure: Jacob Shreffler declares no relevant financial relationships with ineligible companies.

Disclosure: Martin Huecker declares no relevant financial relationships with ineligible companies.

This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) ( http://creativecommons.org/licenses/by-nc-nd/4.0/ ), which permits others to distribute the work, provided that the article is not altered or used commercially. You are not required to obtain permission to distribute this article, provided that you credit the author and journal.



What Is Statistical Analysis? Definition, Types, and Jobs

Statistical analytics is a high-demand career with great benefits. Learn how you can apply your statistical and data science skills to this growing field.

[Featured image: Analysts study sheets of paper containing statistical charts and graphs]

Statistical analysis is the process of collecting large volumes of data and then using statistics and other data analysis techniques to identify trends, patterns, and insights. If you're a whiz at data and statistics, statistical analysis could be a great career match for you. The rise of big data, machine learning, and technology in our society has created a high demand for statistical analysts, and it's an exciting time to develop these skills and find a job you love. In this article, you'll learn more about statistical analysis, including its definition, its different types, how it's done, and jobs that use it. At the end, you'll also explore suggested cost-effective courses that can help you gain greater knowledge of both statistical and data analytics.

Statistical analysis definition

Statistical analysis is the process of collecting and analyzing large volumes of data in order to identify trends and develop valuable insights.

In the professional world, statistical analysts take raw data and find correlations between variables to reveal patterns and trends to relevant stakeholders. Working in a wide range of different fields, statistical analysts are responsible for new scientific discoveries, improving the health of our communities, and guiding business decisions.

Types of statistical analysis

There are two main types of statistical analysis: descriptive and inferential. As a statistical analyst, you'll likely use both types in your daily work to ensure that data is both clearly communicated to others and that it's used effectively to develop actionable insights. At a glance, here's what you need to know about both types of statistical analysis:

Descriptive statistical analysis

Descriptive statistics summarizes the information within a data set without drawing conclusions about its contents. For example, if a business gave you a book of its expenses and you summarized the percentage of money it spent on different categories of items, then you would be performing a form of descriptive statistics.
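A toy sketch of that expense summary (all figures invented):

```python
# Hypothetical expense ledger summarized as percentages per category --
# a simple piece of descriptive statistics
expenses = {"salaries": 60000, "rent": 24000, "supplies": 9000, "marketing": 7000}

total = sum(expenses.values())
for category, amount in expenses.items():
    print(f"{category}: {amount / total:.1%}")
```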

When performing descriptive statistics, you will often use data visualization to present information in the form of graphs, tables, and charts to clearly convey it to others in an understandable format. Typically, leaders in a company or organization will then use this data to guide their decision making going forward.

Inferential statistical analysis

Inferential statistics takes the results of descriptive statistics one step further by drawing conclusions from the data and then making recommendations. For example, instead of only summarizing the business's expenses, you might go on to recommend in which areas to reduce spending and suggest an alternative budget.

Inferential statistical analysis is often used by businesses to inform company decisions and in scientific research to find new relationships between variables. 

Statistical analyst duties

Statistical analysts focus on making large sets of data understandable to a more general audience. In effect, you'll use your math and data skills to translate big numbers into easily digestible graphs, charts, and summaries for key decision makers within businesses and other organizations. Typical job responsibilities of statistical analysts include:

Extracting and organizing large sets of raw data

Determining which data is relevant and which should be excluded

Developing new data collection strategies

Meeting with clients and professionals to review data analysis plans

Creating data reports and easily understandable representations of the data

Presenting data

Interpreting data results

Creating recommendations for a company or other organizations

Your job responsibilities will differ depending on whether you work for a federal agency, a private company, or another business sector. Many industries need statistical analysts, so exploring your passions and seeing how you can best apply your data skills can be exciting. 

Statistical analysis skills

Because most of your job responsibilities will likely focus on data and statistical analysis, mathematical skills are crucial. High-level math skills can help you fact-check your work and create strategies to analyze the data, even if you use software for many computations. When honing your mathematical skills, focusing on statistics—specifically statistics with large data sets—can help set you apart when searching for job opportunities. Competency with computer software and learning new platforms will also help you excel in more advanced positions and put you in high demand.

Data analytics, problem-solving, and critical thinking are vital skills to help you determine a data set's true meaning and bigger picture. Often, large data sets may not be what they appear to be on the surface. To get to the bottom of things, you'll need to think critically about factors that may influence the data set, create an informed analysis plan, and parse out bias to identify insightful trends.

To excel in the workplace, you'll need to hone your database management skills, keep up to date on statistical methodology, and continually improve your research skills. These skills take time to build, so starting with introductory courses and having patience while you build skills is important.

Common software used in statistical analytics jobs

Statistical analysis often involves computations on data sets too large to work through by hand. The good news is that many kinds of statistical software have been developed to help analyze data effectively and efficiently. Gaining mastery over this statistical software can make you look attractive to employers and allow you to work on more complex projects.

Statistical software is beneficial for both descriptive and inferential statistics. You can use it to generate charts and graphs or perform computations to draw conclusions and inferences from the data. While the type of statistical software you use will depend on your employer, commonly used programs include:

Read more: The 7 Data Analysis Software You Need to Know

Pathways to a career in statistical analytics

Many paths to becoming a statistical analyst exist, but most jobs in this field require a bachelor’s degree. Employers will typically look for a degree in an area that focuses on math, computer science, statistics, or data science to ensure you have the skills needed for the job. If your bachelor’s degree is in another field, gaining experience through entry-level data entry jobs can help get your foot in the door. Many employers look for work experience in related careers such as being a research assistant, data manager, or intern in the field.

Earning a graduate degree in statistical analytics or a related field can also help you stand out on your resume and demonstrate a deep knowledge of the skills needed to perform the job successfully. Generally, employers focus more on making sure you have the mathematical and data analysis skills required to perform complex statistical analytics on its data. After all, you will be helping them to make decisions, so they want to feel confident in your ability to advise them in the right direction.

Read more: Your Guide to a Career as a Statistician—What to Expect

How much do statistical analytics professionals earn? 

Statistical analysts earn well above the national average and enjoy many benefits on the job. There are many careers utilizing statistical analytics, so comparing salaries can help determine if the job benefits align with your expectations.

Actuary

Median annual salary: $113,990

Job outlook for 2022 to 2032: 23% [ 1 ]

Data scientist

Median annual salary: $103,500

Job outlook for 2022 to 2032: 35% [ 2 ]

Financial risk specialist

Median annual salary: $102,120

Job outlook for 2022 to 2032: 8% [ 3 ]

Investment analyst

Median annual salary: $95,080

Operations research analyst

Median annual salary: $85,720

Job outlook for 2022 to 2032: 23% [ 4 ]

Market research analyst

Median annual salary: $68,230

Job outlook for 2022 to 2032: 13% [ 5 ]

Statistician

Median annual salary: $99,960

Job outlook for 2022 to 2032: 30% [ 6 ]

Read more: How Much Do Statisticians Make? Your 2022 Statistician Salary Guide

Statistical analysis job outlook

Jobs that use statistical analysis have a positive outlook for the foreseeable future.

According to the US Bureau of Labor Statistics (BLS), the number of jobs for mathematicians and statisticians is projected to grow by 30 percent between 2022 and 2032, adding an average of 3,500 new jobs each year throughout the decade [ 6 ].

As we create more ways to collect data worldwide, there will be an increased need for people able to analyze and make sense of the data.

Ready to take the next step in your career?

Statistical analytics could be an excellent career match for those with an affinity for math, data, and problem-solving. Here are some popular courses to consider as you prepare for a career in statistical analysis:

Learn fundamental processes and tools with Google's Data Analytics Professional Certificate . You'll learn how to process and analyze data, use key analysis tools, apply R programming, and create visualizations that can inform key business decisions.

Grow your comfort using R with Duke University's Data Analysis with R Specialization . Statistical analysts commonly use R for testing, modeling, and analysis. Here, you'll learn and practice those processes.

Apply statistical analysis with Rice University's Business Statistics and Analysis Specialization . Contextualize your technical and analytical skills by using them to solve business problems and complete a hands-on Capstone Project to demonstrate your knowledge.

Article sources

US Bureau of Labor Statistics. "Occupational Outlook Handbook: Actuaries," https://www.bls.gov/ooh/math/actuaries.htm. Accessed November 21, 2023.

US Bureau of Labor Statistics. "Occupational Outlook Handbook: Data Scientists," https://www.bls.gov/ooh/math/data-scientists.htm. Accessed November 21, 2023.

US Bureau of Labor Statistics. "Occupational Outlook Handbook: Financial Analysts," https://www.bls.gov/ooh/business-and-financial/financial-analysts.htm. Accessed November 21, 2023.

US Bureau of Labor Statistics. "Occupational Outlook Handbook: Operations Research Analysts," https://www.bls.gov/ooh/math/operations-research-analysts.htm. Accessed November 21, 2023.

US Bureau of Labor Statistics. "Occupational Outlook Handbook: Market Research Analyst," https://www.bls.gov/ooh/business-and-financial/market-research-analysts.htm. Accessed November 21, 2023.

US Bureau of Labor Statistics. "Occupational Outlook Handbook: Mathematicians and Statisticians," https://www.bls.gov/ooh/math/mathematicians-and-statisticians.htm. Accessed November 21, 2023.


The Beginner's Guide to Statistical Analysis | 5 Steps & Examples

Statistical analysis means investigating trends, patterns, and relationships using quantitative data . It is an important research tool used by scientists, governments, businesses, and other organisations.

To draw valid conclusions, statistical analysis requires careful planning from the very start of the research process . You need to specify your hypotheses and make decisions about your research design, sample size, and sampling procedure.

After collecting data from your sample, you can organise and summarise the data using descriptive statistics . Then, you can use inferential statistics to formally test hypotheses and make estimates about the population. Finally, you can interpret and generalise your findings.

This article is a practical introduction to statistical analysis for students and researchers. We’ll walk you through the steps using two research examples. The first investigates a potential cause-and-effect relationship, while the second investigates a potential correlation between variables.

Table of contents

  • Step 1: Write your hypotheses and plan your research design
  • Step 2: Collect data from a sample
  • Step 3: Summarise your data with descriptive statistics
  • Step 4: Test hypotheses or make estimates with inferential statistics
  • Step 5: Interpret your results
  • Frequently asked questions about statistics

To collect valid data for statistical analysis, you first need to specify your hypotheses and plan out your research design.

Writing statistical hypotheses

The goal of research is often to investigate a relationship between variables within a population . You start with a prediction, and use statistical analysis to test that prediction.

A statistical hypothesis is a formal way of writing a prediction about a population. Every research prediction is rephrased into null and alternative hypotheses that can be tested using sample data.

While the null hypothesis always predicts no effect or no relationship between variables, the alternative hypothesis states your research prediction of an effect or relationship.

  • Null hypothesis: A 5-minute meditation exercise will have no effect on math test scores in teenagers.
  • Alternative hypothesis: A 5-minute meditation exercise will improve math test scores in teenagers.
  • Null hypothesis: Parental income and GPA have no relationship with each other in college students.
  • Alternative hypothesis: Parental income and GPA are positively correlated in college students.

Planning your research design

A research design is your overall strategy for data collection and analysis. It determines the statistical tests you can use to test your hypothesis later on.

First, decide whether your research will use a descriptive, correlational, or experimental design. Experiments directly influence variables, whereas descriptive and correlational studies only measure variables.

  • In an experimental design , you can assess a cause-and-effect relationship (e.g., the effect of meditation on test scores) using statistical tests of comparison or regression.
  • In a correlational design , you can explore relationships between variables (e.g., parental income and GPA) without any assumption of causality using correlation coefficients and significance tests.
  • In a descriptive design , you can study the characteristics of a population or phenomenon (e.g., the prevalence of anxiety in U.S. college students) using statistical tests to draw inferences from sample data.

Your research design also concerns whether you’ll compare participants at the group level or individual level, or both.

  • In a between-subjects design , you compare the group-level outcomes of participants who have been exposed to different treatments (e.g., those who performed a meditation exercise vs those who didn’t).
  • In a within-subjects design , you compare repeated measures from participants who have participated in all treatments of a study (e.g., scores from before and after performing a meditation exercise).
  • In a mixed (factorial) design , one variable is altered between subjects and another is altered within subjects (e.g., pretest and posttest scores from participants who either did or didn’t do a meditation exercise).
Example: Experimental research design. First, you'll take baseline test scores from participants. Then, your participants will undergo a 5-minute meditation exercise. Finally, you'll record participants' scores from a second math test. In this experiment, the independent variable is the 5-minute meditation exercise, and the dependent variable is the math test score from before and after the intervention.

Example: Correlational research design. In a correlational study, you test whether there is a relationship between parental income and GPA in graduating college students. To collect your data, you will ask participants to fill in a survey and self-report their parents' incomes and their own GPA.

Measuring variables

When planning a research design, you should operationalise your variables and decide exactly how you will measure them.

For statistical analysis, it’s important to consider the level of measurement of your variables, which tells you what kind of data they contain:

  • Categorical data represents groupings. These may be nominal (e.g., gender) or ordinal (e.g. level of language ability).
  • Quantitative data represents amounts. These may be on an interval scale (e.g. test score) or a ratio scale (e.g. age).

Many variables can be measured at different levels of precision. For example, age data can be quantitative (8 years old) or categorical (young). If a variable is coded numerically (e.g., level of agreement from 1–5), it doesn’t automatically mean that it’s quantitative instead of categorical.

Identifying the measurement level is important for choosing appropriate statistics and hypothesis tests. For example, you can calculate a mean score with quantitative data, but not with categorical data.

In a research study, along with measures of your variables of interest, you’ll often collect data on relevant participant characteristics.

Population vs sample

In most cases, it’s too difficult or expensive to collect data from every member of the population you’re interested in studying. Instead, you’ll collect data from a sample.

Statistical analysis allows you to apply your findings beyond your own sample as long as you use appropriate sampling procedures . You should aim for a sample that is representative of the population.

Sampling for statistical analysis

There are two main approaches to selecting a sample.

  • Probability sampling: every member of the population has a chance of being selected for the study through random selection.
  • Non-probability sampling: some members of the population are more likely than others to be selected for the study because of criteria such as convenience or voluntary self-selection.

In theory, for highly generalisable findings, you should use a probability sampling method. Random selection reduces sampling bias and ensures that data from your sample is actually typical of the population. Parametric tests can be used to make strong statistical inferences when data are collected using probability sampling.

But in practice, it’s rarely possible to gather the ideal sample. While non-probability samples are more likely to be biased, they are much easier to recruit and collect data from. Non-parametric tests are more appropriate for non-probability samples, but they result in weaker inferences about the population.

If you want to use parametric tests for non-probability samples, you have to make the case that:

  • your sample is representative of the population you’re generalising your findings to.
  • your sample lacks systematic bias.

Keep in mind that external validity means that you can only generalise your conclusions to others who share the characteristics of your sample. For instance, results from Western, Educated, Industrialised, Rich and Democratic samples (e.g., college students in the US) aren’t automatically applicable to all non-WEIRD populations.

If you apply parametric tests to data from non-probability samples, be sure to elaborate on the limitations of how far your results can be generalised in your discussion section .

Create an appropriate sampling procedure

Based on the resources available for your research, decide on how you’ll recruit participants.

  • Will you have resources to advertise your study widely, including outside of your university setting?
  • Will you have the means to recruit a diverse sample that represents a broad population?
  • Do you have time to contact and follow up with members of hard-to-reach groups?

Example: Sampling (experimental study). Your participants are self-selected by their schools. Although you're using a non-probability sample, you aim for a diverse and representative sample.

Example: Sampling (correlational study). Your main population of interest is male college students in the US. Using social media advertising, you recruit senior-year male college students from a smaller subpopulation: seven universities in the Boston area.

Calculate sufficient sample size

Before recruiting participants, decide on your sample size either by looking at other studies in your field or using statistics. A sample that's too small may be unrepresentative of the population, while a sample that's too large will be more costly than necessary.

There are many sample size calculators online. Different formulas are used depending on whether you have subgroups or how rigorous your study should be (e.g., in clinical research). As a rule of thumb, a minimum of 30 units or more per subgroup is necessary.

To use these calculators, you have to understand and input these key components:

  • Significance level (alpha): the risk of rejecting a true null hypothesis that you are willing to take, usually set at 5%.
  • Statistical power : the probability of your study detecting an effect of a certain size if there is one, usually 80% or higher.
  • Expected effect size : a standardised indication of how large the expected result of your study will be, usually based on other similar studies.
  • Population standard deviation: an estimate of the population parameter based on a previous study or a pilot study of your own.
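As a rough illustration of how these components combine, here is a minimal sketch of a sample-size calculation for a two-sample comparison, using the normal-approximation formula n = 2 * ((z_alpha + z_beta) / d)^2. The default inputs (alpha = 0.05, power = 0.80, expected effect size d = 0.5) are illustrative assumptions, not values prescribed by any particular calculator.

```python
# Sketch: minimum sample size per group for a two-sided, two-sample test,
# via the normal approximation. Inputs are illustrative assumptions.
from math import ceil
from statistics import NormalDist

def sample_size_per_group(alpha=0.05, power=0.80, effect_size=0.5):
    """Approximate n per group for a two-sided, two-sample comparison."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical z for the significance level
    z_beta = NormalDist().inv_cdf(power)           # z for the desired statistical power
    n = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return ceil(n)

print(sample_size_per_group())  # medium effect (d = 0.5) -> roughly 63 per group
```

Note how the result lands comfortably above the 30-per-subgroup rule of thumb mentioned above; a smaller expected effect size pushes the required n up sharply.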

Once you’ve collected all of your data, you can inspect them and calculate descriptive statistics that summarise them.

Inspect your data

There are various ways to inspect your data, including the following:

  • Organising data from each variable in frequency distribution tables .
  • Displaying data from a key variable in a bar chart to view the distribution of responses.
  • Visualising the relationship between two variables using a scatter plot .

By visualising your data in tables and graphs, you can assess whether your data follow a skewed or normal distribution and whether there are any outliers or missing data.
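For example, a frequency distribution table for a single categorical variable can be built in a few lines of standard-library Python; the survey responses below are invented:

```python
# Sketch: a quick frequency-distribution table for one categorical variable.
from collections import Counter

responses = ["agree", "agree", "neutral", "disagree", "agree", "neutral"]
freq = Counter(responses)
total = len(responses)

# Print each value with its count and its share of all responses
for value, count in freq.most_common():
    print(f"{value:<10} {count:>3} {100 * count / total:6.1f}%")
```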

A normal distribution means that your data are symmetrically distributed around a center where most values lie, with the values tapering off at the tail ends.

Mean, median, mode, and standard deviation in a normal distribution

In contrast, a skewed distribution is asymmetric and has more values on one end than the other. The shape of the distribution is important to keep in mind because only some descriptive statistics should be used with skewed distributions.

Extreme outliers can also produce misleading statistics, so you may need a systematic approach to dealing with these values.

Calculate measures of central tendency

Measures of central tendency describe where most of the values in a data set lie. Three main measures of central tendency are often reported:

  • Mode : the most popular response or value in the data set.
  • Median : the value in the exact middle of the data set when ordered from low to high.
  • Mean : the sum of all values divided by the number of values.

However, depending on the shape of the distribution and level of measurement, only one or two of these measures may be appropriate. For example, many demographic characteristics can only be described using the mode or proportions, while a variable like reaction time may not have a mode at all.
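All three measures can be sketched on a small, made-up set of test scores using Python's built-in statistics module:

```python
# Sketch: mean, median, and mode on invented test scores.
import statistics

scores = [72, 85, 85, 90, 68, 85, 77]

print(statistics.mean(scores))    # sum of all values / number of values
print(statistics.median(scores))  # middle value when sorted from low to high
print(statistics.mode(scores))    # most frequent value in the data set
```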

Calculate measures of variability

Measures of variability tell you how spread out the values in a data set are. Four main measures of variability are often reported:

  • Range : the highest value minus the lowest value of the data set.
  • Interquartile range : the range of the middle half of the data set.
  • Standard deviation : the average distance between each value in your data set and the mean.
  • Variance : the square of the standard deviation.

Once again, the shape of the distribution and level of measurement should guide your choice of variability statistics. The interquartile range is the best measure for skewed distributions, while standard deviation and variance provide the best information for normal distributions.
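The same kind of made-up score set can illustrate all four measures of variability (statistics.quantiles requires Python 3.8+):

```python
# Sketch: range, interquartile range, standard deviation, and variance
# on invented test scores, using only the standard library.
import statistics

scores = [72, 85, 85, 90, 68, 85, 77]

data_range = max(scores) - min(scores)         # highest minus lowest value
q1, _, q3 = statistics.quantiles(scores, n=4)  # quartile cut points
iqr = q3 - q1                                  # range of the middle half
sd = statistics.stdev(scores)                  # sample standard deviation
var = statistics.variance(scores)              # square of the standard deviation

print(data_range, iqr, round(sd, 2), round(var, 2))
```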

Using your table, you should check whether the units of the descriptive statistics are comparable for pretest and posttest scores. For example, are the variance levels similar across the groups? Are there any extreme values? If there are, you may need to identify and remove extreme outliers in your data set or transform your data before performing a statistical test.

Example: Descriptive statistics (experimental study). From this table, we can see that the mean score increased after the meditation exercise, and the variances of the two scores are comparable. Next, we can perform a statistical test to find out if this improvement in test scores is statistically significant in the population.

Example: Descriptive statistics (correlational study). After collecting data from 653 students, you tabulate descriptive statistics for annual parental income and GPA.

It’s important to check whether you have a broad range of data points. If you don’t, your data may be skewed towards some groups more than others (e.g., high academic achievers), and only limited inferences can be made about a relationship.

A number that describes a sample is called a statistic , while a number describing a population is called a parameter . Using inferential statistics , you can make conclusions about population parameters based on sample statistics.

Researchers often use two main methods (simultaneously) to make inferences in statistics.

  • Estimation: calculating population parameters based on sample statistics.
  • Hypothesis testing: a formal process for testing research predictions about the population using samples.

You can make two types of estimates of population parameters from sample statistics:

  • A point estimate : a value that represents your best guess of the exact parameter.
  • An interval estimate : a range of values that represent your best guess of where the parameter lies.

If your aim is to infer and report population characteristics from sample data, it’s best to use both point and interval estimates in your paper.

You can consider a sample statistic a point estimate for the population parameter when you have a representative sample (e.g., in a wide public opinion poll, the proportion of a sample that supports the current government is taken as the population proportion of government supporters).

There’s always error involved in estimation, so you should also provide a confidence interval as an interval estimate to show the variability around a point estimate.

A confidence interval uses the standard error and the z score from the standard normal distribution to convey where you’d generally expect to find the population parameter most of the time.
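As a sketch, a z-based 95% confidence interval for a sample mean follows the formula mean ± z * (s / sqrt(n)). The sample below is invented, and for small samples a t critical value would be more appropriate than z:

```python
# Sketch: a 95% z-based confidence interval for a sample mean.
# The sample data are invented for illustration.
from math import sqrt
from statistics import NormalDist, mean, stdev

sample = [101, 98, 110, 105, 95, 99, 104, 107, 100, 103]
n = len(sample)
m = mean(sample)
se = stdev(sample) / sqrt(n)     # standard error of the mean
z = NormalDist().inv_cdf(0.975)  # ~1.96 for a two-sided 95% interval

lower, upper = m - z * se, m + z * se
print(f"95% CI: ({lower:.2f}, {upper:.2f})")
```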

Hypothesis testing

Using data from a sample, you can test hypotheses about relationships between variables in the population. Hypothesis testing starts with the assumption that the null hypothesis is true in the population, and you use statistical tests to assess whether the null hypothesis can be rejected or not.

Statistical tests determine where your sample data would lie on an expected distribution of sample data if the null hypothesis were true. These tests give two main outputs:

  • A test statistic tells you how much your data differs from the null hypothesis of the test.
  • A p value tells you the likelihood of obtaining your results if the null hypothesis is actually true in the population.

Statistical tests come in three main varieties:

  • Comparison tests assess group differences in outcomes.
  • Regression tests assess cause-and-effect relationships between variables.
  • Correlation tests assess relationships between variables without assuming causation.

Your choice of statistical test depends on your research questions, research design, sampling method, and data characteristics.

Parametric tests

Parametric tests make powerful inferences about the population based on sample data. But to use them, some assumptions must be met, and only some types of variables can be used. If your data violate these assumptions, you can perform appropriate data transformations or use alternative non-parametric tests instead.

A regression models the extent to which changes in a predictor variable result in changes in the outcome variable(s).

  • A simple linear regression includes one predictor variable and one outcome variable.
  • A multiple linear regression includes two or more predictor variables and one outcome variable.

Comparison tests usually compare the means of groups. These may be the means of different groups within a sample (e.g., a treatment and control group), the means of one sample group taken at different times (e.g., pretest and posttest scores), or a sample mean and a population mean.

  • A t test is for exactly 1 or 2 groups when the sample is small (30 or less).
  • A z test is for exactly 1 or 2 groups when the sample is large.
  • An ANOVA is for 3 or more groups.

The z and t tests have subtypes based on the number and types of samples and the hypotheses:

  • If you have only one sample that you want to compare to a population mean, use a one-sample test .
  • If you have paired measurements (within-subjects design), use a dependent (paired) samples test .
  • If you have completely separate measurements from two unmatched groups (between-subjects design), use an independent (unpaired) samples test .
  • If you expect a difference between groups in a specific direction, use a one-tailed test .
  • If you don’t have any expectations for the direction of a difference between groups, use a two-tailed test .

The only parametric correlation test is Pearson’s r . The correlation coefficient ( r ) tells you the strength of a linear relationship between two quantitative variables.

However, to test whether the correlation in the sample is strong enough to be important in the population, you also need to perform a significance test of the correlation coefficient, usually a t test, to obtain a p value. This test uses your sample size to calculate how much the correlation coefficient differs from zero in the population.

You use a dependent-samples, one-tailed t test to assess whether the meditation exercise significantly improved math test scores. The test gives you:

  • a t value (test statistic) of 3.00
  • a p value of 0.0028
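A dependent-samples t statistic like the one above can be computed by hand as t = mean(d) / (stdev(d) / sqrt(n)), where d is each participant's posttest-minus-pretest difference. The scores below are invented, and obtaining the p value would additionally require the t distribution with n - 1 degrees of freedom:

```python
# Sketch: a dependent (paired) samples t statistic on invented scores.
from math import sqrt
from statistics import mean, stdev

pretest =  [60, 72, 55, 68, 75, 63, 70, 58]
posttest = [66, 75, 60, 70, 80, 64, 76, 61]

# Per-participant improvement: posttest minus pretest
diffs = [post - pre for pre, post in zip(pretest, posttest)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / sqrt(n))
print(round(t, 2))
```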

Although Pearson’s r is a test statistic, it doesn’t tell you anything about how significant the correlation is in the population. You also need to test whether this sample correlation coefficient is large enough to demonstrate a correlation in the population.

A t test can also determine how significantly a correlation coefficient differs from zero based on sample size. Since you expect a positive correlation between parental income and GPA, you use a one-sample, one-tailed t test. The t test gives you:

  • a t value of 3.08
  • a p value of 0.001

The final step of statistical analysis is interpreting your results.

Statistical significance

In hypothesis testing, statistical significance is the main criterion for forming conclusions. You compare your p value to a set significance level (usually 0.05) to decide whether your results are statistically significant or non-significant.

Statistically significant results are considered unlikely to have arisen solely due to chance. There is only a very low chance of such a result occurring if the null hypothesis is true in the population.

Example: Interpret your results (experimental study). You compare your p value of 0.0028 to your significance threshold of 0.05. Since the p value is below the threshold, you can reject the null hypothesis. This means that you believe the meditation intervention, rather than random factors, directly caused the increase in test scores.

Example: Interpret your results (correlational study). You compare your p value of 0.001 to your significance threshold of 0.05. With a p value under this threshold, you can reject the null hypothesis. This indicates a statistically significant correlation between parental income and GPA in male college students.

Note that correlation doesn’t always mean causation, because there are often many underlying factors contributing to a complex variable like GPA. Even if one variable is related to another, this may be because of a third variable influencing both of them, or indirect links between the two variables.

Effect size

A statistically significant result doesn’t necessarily mean that there are important real life applications or clinical outcomes for a finding.

In contrast, the effect size indicates the practical significance of your results. It’s important to report effect sizes along with your inferential statistics for a complete picture of your results. You should also report interval estimates of effect sizes if you’re writing an APA style paper .

Example: Effect size (experimental study). With a Cohen’s d of 0.72, there’s medium to high practical significance to your finding that the meditation exercise improved test scores.

Example: Effect size (correlational study). To determine the effect size of the correlation coefficient, you compare your Pearson’s r value to Cohen’s effect size criteria.
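One common way to compute Cohen's d for a pretest/posttest comparison is the mean difference divided by the pooled standard deviation of the two score sets; this sketch uses invented scores, and other pooling conventions exist:

```python
# Sketch: Cohen's d as mean difference over pooled standard deviation.
# Scores are invented; by Cohen's criteria, d ~ 0.5 is a medium effect.
from math import sqrt
from statistics import mean, variance

pretest =  [60, 72, 55, 68, 75, 63, 70, 58]
posttest = [66, 75, 60, 70, 80, 64, 76, 61]

pooled_sd = sqrt((variance(pretest) + variance(posttest)) / 2)
d = (mean(posttest) - mean(pretest)) / pooled_sd
print(round(d, 2))
```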

Decision errors

Type I and Type II errors are mistakes made in research conclusions. A Type I error means rejecting the null hypothesis when it’s actually true, while a Type II error means failing to reject the null hypothesis when it’s false.

You can aim to minimise the risk of these errors by selecting an optimal significance level and ensuring high power . However, there’s a trade-off between the two errors, so a fine balance is necessary.

Frequentist versus Bayesian statistics

Traditionally, frequentist statistics emphasises null hypothesis significance testing and always starts with the assumption of a true null hypothesis.

However, Bayesian statistics has grown in popularity as an alternative approach in the last few decades. In this approach, you use previous research to continually update your hypotheses based on your expectations and observations.

Bayes factor compares the relative strength of evidence for the null versus the alternative hypothesis rather than making a conclusion about rejecting the null hypothesis or not.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Statistical analysis is the main method for analyzing quantitative research data . It uses probabilities and models to test predictions about a population from sample data.



What is percentage in research?  

Percentage in research refers to the use of numbers expressed as a proportion of a whole to present findings and compare results. It is commonly used to report sensitivity, specificity, positive predictive value, and negative predictive value in scientific studies. However, it is important to ensure the clinical validity and mathematical accuracy of the percentages reported. Teaching percentages can be challenging, and research has explored different methods to improve understanding, such as using contextual situations and two-way tables. In the context of fundraising for the nonprofit sector, the percentage system allows taxpayers to allocate a portion of their personal income to support organizations. Experimental research has shown that the introduction of the percentage system does not significantly lower individual contributions and may even lead to a slight increase in some cases.
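The underlying formula, percentage = (count / total) x 100, is trivial to express in code; the respondent counts below are illustrative:

```python
# Sketch: the percentage formula used in percentage analysis.
def percentage(count, total):
    """Share of respondents, as a percentage of the total."""
    return count / total * 100

# e.g., 25 respondents in a category out of 40 sampled
print(round(percentage(25, 40), 2))  # -> 62.5
```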


Gender pay gap in U.S. hasn’t changed much in two decades

The gender gap in pay has remained relatively stable in the United States over the past 20 years or so. In 2022, women earned an average of 82% of what men earned, according to a new Pew Research Center analysis of median hourly earnings of both full- and part-time workers. These results are similar to where the pay gap stood in 2002, when women earned 80% as much as men.

[Figure: The gender pay gap in the U.S. has not closed in recent years, but is narrower among young workers]

As has long been the case, the wage gap is smaller for workers ages 25 to 34 than for all workers 16 and older. In 2022, women ages 25 to 34 earned an average of 92 cents for every dollar earned by a man in the same age group – an 8-cent gap. By comparison, the gender pay gap among workers of all ages that year was 18 cents.

While the gender pay gap has not changed much in the last two decades, it has narrowed considerably when looking at the longer term, both among all workers ages 16 and older and among those ages 25 to 34. The estimated 18-cent gender pay gap among all workers in 2022 was down from 35 cents in 1982. And the 8-cent gap among workers ages 25 to 34 in 2022 was down from a 26-cent gap four decades earlier.

The gender pay gap measures the difference in median hourly earnings between men and women who work full or part time in the United States. Pew Research Center’s estimate of the pay gap is based on an analysis of Current Population Survey (CPS) monthly outgoing rotation group files ( IPUMS ) from January 1982 to December 2022, combined to create annual files. To understand how we calculate the gender pay gap, read our 2013 post, “How Pew Research Center measured the gender pay gap.”

The COVID-19 outbreak affected data collection efforts by the U.S. government in its surveys, especially in 2020 and 2021, limiting in-person data collection and affecting response rates. It is possible that some measures of economic outcomes and how they vary across demographic groups are affected by these changes in data collection.

In addition to findings about the gender wage gap, this analysis includes information from a Pew Research Center survey about the perceived reasons for the pay gap, as well as the pressures and career goals of U.S. men and women. The survey was conducted among 5,098 adults and includes a subset of questions asked only of 2,048 adults who are employed part time or full time, from Oct. 10-16, 2022. Everyone who took part is a member of the Center’s American Trends Panel (ATP), an online survey panel that is recruited through national, random sampling of residential addresses. This way, nearly all U.S. adults have a chance of selection. The survey is weighted to be representative of the U.S. adult population by gender, race, ethnicity, partisan affiliation, education and other categories. Read more about the ATP’s methodology.

Here are the questions used in this analysis, along with responses, and its methodology.

The U.S. Census Bureau has also analyzed the gender pay gap, though its analysis looks only at full-time workers (as opposed to full- and part-time workers). In 2021, full-time, year-round working women earned 84% of what their male counterparts earned, on average, according to the Census Bureau’s most recent analysis.

Much of the gender pay gap has been explained by measurable factors such as educational attainment, occupational segregation and work experience. The narrowing of the gap over the long term is attributable in large part to gains women have made in each of these dimensions.

Related: The Enduring Grip of the Gender Pay Gap

Even though women have increased their presence in higher-paying jobs traditionally dominated by men, such as professional and managerial positions, women as a whole continue to be overrepresented in lower-paying occupations relative to their share of the workforce. This may contribute to gender differences in pay.

Other factors that are difficult to measure, including gender discrimination, may also contribute to the ongoing wage discrepancy.

Perceived reasons for the gender wage gap

[Chart: Half of U.S. adults say women being treated differently by employers is a major reason for the gender wage gap.]

When asked about the factors that may play a role in the gender wage gap, half of U.S. adults point to women being treated differently by employers as a major reason, according to a Pew Research Center survey conducted in October 2022. Smaller shares point to women making different choices about how to balance work and family (42%) and working in jobs that pay less (34%).

There are some notable differences between men and women in views of what’s behind the gender wage gap. Women are much more likely than men (61% vs. 37%) to say a major reason for the gap is that employers treat women differently. And while 45% of women say a major factor is that women make different choices about how to balance work and family, men are slightly less likely to hold that view (40% say this).

Parents with children younger than 18 in the household are more likely than those who don’t have young kids at home (48% vs. 40%) to say a major reason for the pay gap is the choices that women make about how to balance family and work. On this question, differences by parental status are evident among both men and women.

Views about reasons for the gender wage gap also differ by party. About two-thirds of Democrats and Democratic-leaning independents (68%) say a major factor behind wage differences is that employers treat women differently, but far fewer Republicans and Republican leaners (30%) say the same. Conversely, Republicans are more likely than Democrats to say women’s choices about how to balance family and work (50% vs. 36%) and their tendency to work in jobs that pay less (39% vs. 30%) are major reasons why women earn less than men.

Democratic and Republican women are more likely than their male counterparts in the same party to say a major reason for the gender wage gap is that employers treat women differently. About three-quarters of Democratic women (76%) say this, compared with 59% of Democratic men. And while 43% of Republican women say unequal treatment by employers is a major reason for the gender wage gap, just 18% of GOP men share that view.

Pressures facing working women and men

Family caregiving responsibilities bring different pressures for working women and men, and research has shown that being a mother can reduce women’s earnings , while fatherhood can increase men’s earnings .

[Chart: About two-thirds of U.S. working mothers feel a great deal of pressure to focus on responsibilities at home.]

Employed women and men are about equally likely to say they feel a great deal of pressure to support their family financially and to be successful in their jobs and careers, according to the Center’s October survey. But women, and particularly working mothers, are more likely than men to say they feel a great deal of pressure to focus on responsibilities at home.

About half of employed women (48%) report feeling a great deal of pressure to focus on their responsibilities at home, compared with 35% of employed men. Among working mothers with children younger than 18 in the household, two-thirds (67%) say the same, compared with 45% of working dads.

When it comes to supporting their family financially, similar shares of working moms and dads (57% vs. 62%) report they feel a great deal of pressure, but this is driven mainly by the large share of unmarried working mothers who say they feel a great deal of pressure in this regard (77%). Among those who are married, working dads are far more likely than working moms (60% vs. 43%) to say they feel a great deal of pressure to support their family financially. (There were not enough unmarried working fathers in the sample to analyze separately.)

About four-in-ten working parents say they feel a great deal of pressure to be successful at their job or career. These findings don’t differ by gender.

Gender differences in job roles, aspirations

[Chart: Women in the U.S. are more likely than men to say they are not the boss at their job, and don’t want to be in the future.]

Overall, a quarter of employed U.S. adults say they are currently the boss or one of the top managers where they work, according to the Center’s survey. Another 33% say they are not currently the boss but would like to be in the future, while 41% are not and do not aspire to be the boss or one of the top managers.

Men are more likely than women to be a boss or a top manager where they work (28% vs. 21%). This is especially the case among employed fathers, 35% of whom say they are the boss or one of the top managers where they work. (The varying attitudes between fathers and men without children at least partly reflect differences in marital status and educational attainment between the two groups.)

In addition to being less likely than men to say they are currently the boss or a top manager at work, women are also more likely to say they wouldn’t want to be in this type of position in the future. More than four-in-ten employed women (46%) say this, compared with 37% of men. Similar shares of men (35%) and women (31%) say they are not currently the boss but would like to be one day. These patterns are similar among parents.

Note: This is an update of a post originally published on March 22, 2019. Anna Brown and former Pew Research Center writer/editor Amanda Barroso contributed to an earlier version of this analysis. Here are the questions used in this analysis, along with responses, and its methodology.


Carolina Aragão is a research associate focusing on social and demographic trends at Pew Research Center.
