Hypothesis Testing in Data Science: Its Usage and Types

Hypothesis Testing in Data Science is a crucial method for making informed decisions from data. This blog explores its essential usage in analysing trends and patterns, and the different types such as null, alternative, one-tailed, and two-tailed tests, providing a comprehensive understanding for both beginners and advanced practitioners.

Table of Contents  

1) What is Hypothesis Testing in Data Science? 

2) Importance of Hypothesis Testing in Data Science 

3) Types of Hypothesis Testing 

4) Basic steps in Hypothesis Testing 

5) Real-world use cases of Hypothesis Testing 

6) Conclusion 

What is Hypothesis Testing in Data Science?  

Hypothesis Testing in Data Science is a statistical method used to assess the validity of assumptions or claims about a population based on sample data. It involves formulating two Hypotheses, the null Hypothesis (H0) and the alternative Hypothesis (Ha or H1), and then using statistical tests to find out if there is enough evidence to support the alternative Hypothesis.  

Hypothesis Testing is a critical tool for making data-driven decisions, evaluating the significance of observed effects or differences, and drawing meaningful conclusions from data. It allows Data Scientists to uncover patterns, relationships, and insights that inform decisions in domains ranging from medicine to business and beyond. 


Importance of Hypothesis Testing in Data Science  

The significance of Hypothesis Testing in Data Science cannot be overstated. It serves as the cornerstone of data-driven decision-making. By systematically testing Hypotheses, Data Scientists can: 


Objective decision-making 

Hypothesis Testing provides a structured and impartial method for making decisions based on data. In a world where biases can skew perceptions, Data Scientists rely on this method to ensure that their conclusions are grounded in empirical evidence, making their decisions more objective and trustworthy. 

Statistical rigour 

Data Scientists deal with large amounts of data, and Hypothesis Testing helps them make sense of it. It quantifies the significance of observed patterns, differences, or relationships. This statistical rigour is essential in distinguishing between mere coincidences and meaningful findings, reducing the likelihood of making decisions based on random chance. 

Resource allocation 

Resources, whether they are financial, human, or time-related, are often limited. Hypothesis Testing enables efficient resource allocation by guiding Data Scientists towards strategies or interventions that are statistically significant. This ensures that efforts are directed where they are most likely to yield valuable results. 

Risk management 

In domains like healthcare and finance, where lives and livelihoods are at stake, Hypothesis Testing is a critical tool for risk assessment. For instance, in drug development, Hypothesis Testing is used to determine the safety and efficacy of new treatments, helping mitigate potential risks to patients. 

Innovation and progress 

Hypothesis Testing fosters innovation by providing a systematic framework to evaluate new ideas, products, or strategies. It encourages a cycle of experimentation, feedback, and improvement, driving continuous progress and innovation. 

Strategic decision-making 

Organisations base their strategies on data-driven insights. Hypothesis Testing enables them to make informed decisions about market trends, customer behaviour, and product development. These decisions are grounded in empirical evidence, increasing the likelihood of success. 

Scientific integrity 

In scientific research, Hypothesis Testing is integral to maintaining the integrity of research findings. It ensures that conclusions are drawn from rigorous statistical analysis rather than conjecture. This is essential for advancing knowledge and building upon existing research. 

Regulatory compliance 

Many industries, such as pharmaceuticals and aviation, operate under strict regulatory frameworks. Hypothesis Testing is essential for demonstrating compliance with safety and quality standards. It provides the statistical evidence required to meet regulatory requirements. 


Types of Hypothesis Testing  

Hypothesis Testing comes in several different forms. In total, there are five types of Hypothesis Testing, described below: 


Alternative Hypothesis

The Alternative Hypothesis, denoted as Ha or H1, is the assertion or claim that researchers aim to support with their data analysis. It represents the opposite of the null Hypothesis (H0) and suggests that there is a significant effect, relationship, or difference in the population. In simpler terms, it's the statement that researchers hope to find evidence for during their analysis. For example, if you are testing a new drug's efficacy, the alternative Hypothesis might state that the drug has a measurable positive effect on patients' health. 

Null Hypothesis 

The Null Hypothesis, denoted as H0, is the default assumption in Hypothesis Testing. It posits that there is no significant effect, relationship, or difference in the population being studied. In other words, it represents the status quo or the absence of an effect. Researchers typically set out to challenge or disprove the Null Hypothesis by collecting and analysing data. Using the drug efficacy example again, the Null Hypothesis might state that the new drug has no effect on patients' health. 

Non-directional Hypothesis 

A Non-directional Hypothesis, also known as a two-tailed Hypothesis, is used when researchers are interested in whether there is any significant difference, effect, or relationship in either direction (positive or negative). This type of Hypothesis allows for the possibility of finding effects in both directions. For instance, in a study comparing the performance of two groups, a Non-directional Hypothesis would suggest that there is a significant difference between the groups, without specifying which group performs better. 

Directional Hypothesis 

A Directional Hypothesis, also called a one-tailed Hypothesis, is employed when researchers have a specific expectation about the direction of the effect, relationship, or difference they are investigating. In this case, the Hypothesis predicts an outcome in a particular direction—either positive or negative. For example, if you expect that a new teaching method will improve student test scores, a directional Hypothesis would state that the new method leads to higher test scores. 

Statistical Hypothesis 

A Statistical Hypothesis is a Hypothesis formulated in a way that it can be tested using statistical methods. It involves specific numerical values or parameters that can be measured or compared. Statistical Hypotheses are crucial for quantitative research and often involve means, proportions, variances, correlations, or other measurable quantities. These Hypotheses provide a precise framework for conducting statistical tests and drawing conclusions based on data analysis. 


Basic steps in Hypothesis Testing  

Hypothesis Testing is a systematic approach used in statistics to make informed decisions based on data. It is a critical tool in Data Science, research, and many other fields where data analysis is employed. The following are the basic steps involved in Hypothesis Testing: 


1) Formulate Hypotheses 

The first step in Hypothesis Testing is to clearly define your research question and translate it into two mutually exclusive Hypotheses: 

a) Null Hypothesis (H0): This is the default assumption, often representing the status quo or the absence of an effect. It states that there is no significant difference, relationship, or effect in the population. 

b) Alternative Hypothesis (Ha or H1): This is the statement that contradicts the null Hypothesis. It suggests that there is a significant difference, relationship, or effect in the population. 

The formulation of these Hypotheses is crucial, as they serve as the foundation for your entire Hypothesis Testing process. 

2) Collect data 

With your Hypotheses in place, the next step is to gather relevant data through surveys, experiments, observations, or any other suitable method. The data collected should be representative of the population you are studying. The quality and quantity of data are essential factors in the success of your Hypothesis Testing. 

3) Choose a significance level (α) 

Before conducting the statistical test, you need to decide on the level of significance, denoted as α. The significance level represents the threshold for statistical significance and determines how confident you want to be in your results. A common choice is α = 0.05, which implies a 5% chance of making a Type I error (rejecting the null Hypothesis when it's true). You can choose a different α value based on the specific requirements of your analysis. 

4) Perform the test 

Based on the nature of your data and the Hypotheses you've formulated, select the appropriate statistical test. There are various tests available, including t-tests, chi-squared tests, ANOVA, regression analysis, and more. The chosen test should align with the type of data (e.g., continuous or categorical) and the research question (e.g., comparing means or testing for independence). 

Execute the selected statistical test on your data to obtain test statistics and p-values. The test statistics quantify the difference or effect you are investigating, while the p-value represents the probability of obtaining the observed results if the null Hypothesis were true. 
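To make this step concrete, here is a minimal sketch in Python using scipy's two-sample t-test. The data and the group names are synthetic, made up purely for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical data: an outcome metric for a control and a treatment group
rng = np.random.default_rng(42)
group_a = rng.normal(loc=50, scale=5, size=100)   # control
group_b = rng.normal(loc=52, scale=5, size=100)   # treatment

# Two-sample t-test: H0 says the two group means are equal
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")     # p <= 0.05 -> reject H0
```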

5) Analyse the results 

Once you have the test statistics and p-value, it's time to interpret the results. The primary focus is on the p-value: 

a) If the p-value is less than or equal to your chosen significance level (α), typically 0.05, you have evidence to reject the null Hypothesis. This shows that there is a significant difference, relationship, or effect in the population. 

b) If the p-value is more than α, you fail to reject the null Hypothesis, showing that there is insufficient evidence to support the alternative Hypothesis. 

6) Draw conclusions 

Based on the analysis of the p-value and the comparison to the significance level, you can draw conclusions about your research question: 

a) In case you reject the null Hypothesis, you can accept the alternative Hypothesis and make inferences based on the evidence provided by your data. 

b) In case you fail to reject the null Hypothesis, you do not accept the alternative Hypothesis, and you acknowledge that there is no significant evidence to support your claim. 

It's important to communicate your findings clearly, including the implications and limitations of your analysis. 

Real-world use cases of Hypothesis Testing  

The following are some of the real-world use cases of Hypothesis Testing. 

a) Medical research: Hypothesis Testing is crucial in determining the efficacy of new medications or treatments. For instance, in a clinical trial, researchers use Hypothesis Testing to assess whether a new drug is significantly more effective than a placebo in treating a particular condition. 

b) Marketing and advertising: Businesses employ Hypothesis Testing to evaluate the impact of marketing campaigns. A company may test whether a new advertising strategy leads to a significant increase in sales compared to the previous approach. 

c) Manufacturing and quality control: Manufacturing industries use Hypothesis Testing to ensure product quality. For example, in the automotive industry, Hypothesis Testing can be applied to test whether a new manufacturing process results in a significant reduction in defects. 

d) Education: In the field of education, Hypothesis Testing can be used to assess the effectiveness of teaching methods. Researchers may test whether a new teaching approach leads to statistically significant improvements in student performance. 

e) Finance and investment: Investment strategies are often evaluated using Hypothesis Testing. Investors may test whether a new investment strategy outperforms a benchmark index over a specified period.  


Conclusion 

To sum it up, Hypothesis Testing in Data Science is a powerful tool that enables Data Scientists to make evidence-based decisions and draw meaningful conclusions from data. Understanding the types, methods, and steps involved in Hypothesis Testing is essential for any Data Scientist. By rigorously applying Hypothesis Testing techniques, you can gain valuable insights and drive informed decision-making in various domains. 



Data Science from Scratch (ch7) - Hypothesis and Inference

Connecting probability and statistics to hypothesis testing and inference

Table of contents

  • Central Limit Theorem
  • Hypothesis Testing
  • Confidence Intervals
  • Connecting dots with Python

This is a continuation of my progress through Data Science from Scratch by Joel Grus. We'll use a classic coin-flipping example in this post because it is simple to illustrate with both concept and code. The goal of this post is to connect the dots between several concepts including the Central Limit Theorem, Hypothesis Testing, p-Values and confidence intervals, using Python to build our intuition.

Central Limit Theorem

Terms like “null” and “alternative” hypothesis are used quite frequently, so let’s set some context. The “null” is the default position. The “alternative”, alt for short, is something we’re comparing to the default (null).

The classic coin-flipping exercise is to test the fairness of a coin. If a coin is fair, it'll land on heads 50% of the time (and tails 50% of the time). Let's translate this into hypothesis testing language:

Null Hypothesis : Probability of landing on Heads = 0.5.

Alt Hypothesis : Probability of landing on Heads != 0.5.

Each coin flip is a Bernoulli trial, which is an experiment with two outcomes: outcome 1, “success”, with probability p, and outcome 0, “fail”, with probability 1 - p. A coin flip is a Bernoulli trial because there are only two possible outcomes (heads or tails).

Here’s the code for a single Bernoulli Trial:
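A minimal sketch of that function, using the names from this post (the body may differ slightly from the book's):

```python
import random

def bernoulli_trial(p: float) -> int:
    """Returns 1 ("success") with probability p, and 0 ("fail") otherwise."""
    return 1 if random.random() < p else 0
```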

When you sum the independent Bernoulli trials, you get a Binomial(n,p) random variable, a variable whose possible values have a probability distribution. The central limit theorem says that as n, the number of independent Bernoulli trials, gets large, the Binomial distribution approaches a normal distribution.

Here’s the code for when you sum all the Bernoulli Trials to get a Binomial random variable:
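A sketch, reusing bernoulli_trial from above:

```python
def binomial(n: int, p: float) -> int:
    """Returns the sum of n bernoulli_trial(p) outcomes."""
    return sum(bernoulli_trial(p) for _ in range(n))
```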

Note : A single ‘success’ in a Bernoulli trial is ‘x’. Summing up all those x’s into X, is a Binomial random variable. Success doesn’t imply desirability, nor does “failure” imply undesirability. They’re just terms to count the cases we’re looking for (i.e., number of heads in multiple coin flips to assess a coin’s fairness).

Given that our null is (p = 0.5) and alt is (p != 0.5), we can run some independent bernoulli trials, then sum them up to get a binomial random variable.

[Figure: running binomial(1000, 0.5) twice in the REPL, returning 510 and 495]

Each bernoulli_trial is an experiment with either 0 or 1 as its outcome. The binomial function sums up n bernoulli(0.5) trials. We ran both twice and got different results. Each bernoulli experiment can be a success (1) or a fail (0); summing them up into a binomial random variable means we take the probability p (0.5) that a coin flips heads and run the experiment 1,000 times.

The first 1,000 flips gave us 510 heads; the second 1,000 flips gave us 495. We can repeat this process many times to get a distribution, and we can plot this distribution to reinforce our understanding. To do this we'll use the binomial_histogram function. This function picks points from a Binomial(n,p) random variable and plots their histogram.

This plot is then rendered:

[Figure: histogram of 10,000 draws of Binomial(1000, 0.5), approximately normal and centered at 500]

What we did was sum up independent bernoulli_trial(s) of 1,000 coin flips, where the probability of heads is p = 0.5, to create a binomial random variable. We then repeated this a large number of times (N = 10,000) and plotted a histogram of the distribution of all the binomial random variables. Because we did it so many times, the histogram approximates a normal distribution (a smooth bell-shaped curve).

Just to demonstrate how this works, we can generate several binomial random variables:

[Figure: several calls to binomial(1000, 0.5), each returning a value near 500]

If we do this 10,000 times, we'll generate the above histogram. You'll notice that because we are testing whether the coin is fair, the probability of heads (success) should be 0.5 and, from 1,000 coin flips, the mean (mu) should be 500.

We have another function that can help us calculate normal_approximation_to_binomial :
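A sketch of that helper; the mean and standard deviation follow directly from the Binomial(n, p) distribution:

```python
import math

def normal_approximation_to_binomial(n: int, p: float) -> tuple[float, float]:
    """Returns mu and sigma corresponding to a Binomial(n, p) variable."""
    mu = p * n
    sigma = math.sqrt(p * (1 - p) * n)
    return mu, sigma

mu, sigma = normal_approximation_to_binomial(1000, 0.5)   # (500.0, 15.8114)
```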

When calling the function with our parameters, we get a mean mu of 500 (from 1,000 coin flips) and a standard deviation sigma of 15.8114. This means that 68% of the time the binomial random variable will be 500 +/- 15.8114, and 95% of the time it'll be 500 +/- 31.6228 (see the 68-95-99.7 rule).

Hypothesis Testing

Now that we have seen the results of our “coin fairness” experiment plotted on a binomial distribution (approximately normal), we will be interested, for the purpose of testing our hypothesis, in the probability that its realized value (a binomial random variable) lies within or outside a particular interval.

This means we’ll be interested in questions like:

  • What’s the probability that the binomial(n,p) is below a threshold?
  • Above a threshold?
  • Between an interval?
  • Outside an interval?

First, the normal_cdf (normal cumulative distribution function), which we learned in a previous post, is the probability of a variable being below a certain threshold.

Here, X is the number of successes (heads for a ‘fair coin’ with p = 0.5, so mu = 500 and sigma = 15.8113), and we want to find the probability that X falls below 490, which comes out to roughly 26%.

On the other hand, normal_probability_above, the probability that X falls above 490, would be 1 - 0.2635 = 0.7365, or roughly 74%.
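Sketches of both helpers, with the normal CDF written in terms of the error function:

```python
def normal_cdf(x: float, mu: float = 0, sigma: float = 1) -> float:
    """Probability that an N(mu, sigma) variable falls below x."""
    return (1 + math.erf((x - mu) / math.sqrt(2) / sigma)) / 2

# being below a threshold is exactly what the cdf measures
normal_probability_below = normal_cdf

def normal_probability_above(lo: float, mu: float = 0, sigma: float = 1) -> float:
    """Probability that an N(mu, sigma) variable falls above lo."""
    return 1 - normal_cdf(lo, mu, sigma)

normal_probability_below(490, 500, 15.8113)   # ~0.2635
normal_probability_above(490, 500, 15.8113)   # ~0.7365
```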

To make sense of this we need to recall the binomial distribution that approximates the normal distribution, but we'll draw a vertical line at 490.

[Figure: Binomial(1000, 0.5) histogram with a vertical line at 490]

We're asking: given the binomial distribution with mu at 500 and sigma at 15.8113, what is the probability that a binomial random variable falls below the threshold (left of the line)? The answer is approximately 26%; correspondingly, the probability of falling above the threshold (right of the line) is approximately 74%.

Between interval

We may also wonder about the probability of a binomial random variable falling between 490 and 520:

[Figure: Binomial(1000, 0.5) histogram with vertical lines at 490 and 520]

Here is the function to calculate this probability; it comes out to approximately 63%. Note: bear in mind that the full area under the curve is 1.0, or 100%.
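A sketch of that function:

```python
def normal_probability_between(lo: float, hi: float,
                               mu: float = 0, sigma: float = 1) -> float:
    """Probability that an N(mu, sigma) variable falls between lo and hi."""
    return normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)

normal_probability_between(490, 520, 500, 15.8113)   # ~0.6335
```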

Finally, the area outside of the interval should be 1 - 0.6335 = 0.3665:
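Which is just the complement of the previous helper (a sketch):

```python
def normal_probability_outside(lo: float, hi: float,
                               mu: float = 0, sigma: float = 1) -> float:
    """Probability that an N(mu, sigma) variable falls outside [lo, hi]."""
    return 1 - normal_probability_between(lo, hi, mu, sigma)

normal_probability_outside(490, 520, 500, 15.8113)   # ~0.3665
```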

In addition to the above, we may also be interested in finding (symmetric) intervals around the mean that account for a certain level of likelihood , for example, 60% probability centered around the mean.

For this operation we would use the inverse_normal_cdf :

First we’d have to find the cutoffs where the upper and lower tails each contain 20% of the probability. We calculate normal_upper_bound and normal_lower_bound and use those to calculate the normal_two_sided_bounds .
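A sketch of these helpers; inverse_normal_cdf binary-searches over normal_cdf, in the spirit of the book's from-scratch approach:

```python
def inverse_normal_cdf(p: float, mu: float = 0, sigma: float = 1,
                       tolerance: float = 0.00001) -> float:
    """Finds the approximate z with normal_cdf(z) = p, via binary search."""
    if mu != 0 or sigma != 1:
        # compute for the standard normal, then rescale
        return mu + sigma * inverse_normal_cdf(p, tolerance=tolerance)
    low_z, hi_z = -10.0, 10.0            # normal_cdf is ~0 and ~1 out here
    while hi_z - low_z > tolerance:
        mid_z = (low_z + hi_z) / 2
        if normal_cdf(mid_z) < p:
            low_z = mid_z                # midpoint too low, search above it
        else:
            hi_z = mid_z                 # midpoint too high, search below it
    return mid_z

def normal_upper_bound(probability: float, mu: float = 0, sigma: float = 1) -> float:
    """Returns z such that P(Z <= z) = probability."""
    return inverse_normal_cdf(probability, mu, sigma)

def normal_lower_bound(probability: float, mu: float = 0, sigma: float = 1) -> float:
    """Returns z such that P(Z >= z) = probability."""
    return inverse_normal_cdf(1 - probability, mu, sigma)

def normal_two_sided_bounds(probability: float, mu: float = 0,
                            sigma: float = 1) -> tuple[float, float]:
    """Returns the symmetric bounds about the mean that contain probability."""
    tail_probability = (1 - probability) / 2
    upper_bound = normal_lower_bound(tail_probability, mu, sigma)
    lower_bound = normal_upper_bound(tail_probability, mu, sigma)
    return lower_bound, upper_bound

normal_two_sided_bounds(0.60, 500, 15.8113)   # ~(486.69, 513.31)
```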

So if we wanted to know the cutoff points for a 60% probability around the mean and standard deviation (mu = 500, sigma = 15.8113), it would be between 486.69 and 513.31.

Said differently, this means roughly 60% of the time, we can expect the binomial random variable to fall between 486 and 513.

Significance and Power

Now that we have a handle on the binomial normal distribution, thresholds (left and right of the mean), and cut-off points, we want to make a decision about significance . Probably the most important part of statistical significance is that it is a decision to be made, not a standard that is externally set.

Significance is a decision about how willing we are to make a type 1 error (false positive), which we explored in a previous post . The convention is to set it to a 5% or 1% willingness to make a type 1 error. Suppose we say 5%.

We would say that out of 1,000 coin flips, 95% of the time, we’d get between 469 and 531 heads on a “fair coin” and 5% of the time, outside of this 469-531 range.

If we recall our hypotheses:

Null Hypothesis : Probability of landing on Heads = 0.5 (fair coin)

Alt Hypothesis : Probability of landing on Heads != 0.5 (biased coin)

Each test consists of 1,000 bernoulli trials. For each test where the number of heads falls outside the range 469-531, we'll reject the null that the coin is fair, and we'll be wrong (a false positive) 5% of the time. It's a false positive when we incorrectly reject the null hypothesis when it's actually true.

We also want to avoid making a type-2 error (false negative), where we fail to reject the null hypothesis, when it’s actually false.

Note: it's important to keep in mind that terms like significance and power are used to describe tests, in our case the test of whether a coin is fair or not. Each test is the sum of 1,000 independent bernoulli trials.

For a “test” that has a 95% significance, we’ll assume that out of a 1,000 coin flips, it’ll land on heads between 469-531 times and we’ll determine the coin is fair. For the 5% of the time it lands outside of this range, we’ll determine the coin to be “unfair”, but we’ll be wrong because it actually is fair.

To calculate the power of the test, we’ll take the assumed mu and sigma with a 95% bounds (based on the assumption that the probability of the coin landing on heads is 0.5 or 50% - a fair coin). We’ll determine the lower and upper bounds:
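In code (a sketch reusing the helpers above):

```python
mu_0, sigma_0 = normal_approximation_to_binomial(1000, 0.5)   # (500, 15.8114)

# 95% bounds, computed under the assumption that the coin is fair (p = 0.5)
lo, hi = normal_two_sided_bounds(0.95, mu_0, sigma_0)         # ~(469, 531)
```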

And if the coin were actually biased, we should reject the null, but sometimes we'll fail to. Let's suppose the actual probability that the coin lands on heads is 55% (biased towards heads):

Using the same range 469 - 531, where the coin is assumed ‘fair’ with mu at 500 and sigma at 15.8113:

[Figure: fair-coin distribution with 95% significance bounds at 469 and 531]

If the coin, in fact, had a bias towards head (p = 0.55), the distribution would shift right, but if our 95% significance test remains the same, we get:

[Figure: biased-coin (p = 0.55) distribution shifted right, overlapping the 469-531 region]

The probability of making a type-2 error is 11.345%. This is the probability that we see the coin's distribution fall within the previous interval 469-531 and think we should accept the null hypothesis (that the coin is fair), failing to see that the distribution has shifted towards a coin biased to heads.

The other way to arrive at this is to find the probability, under the new mu and sigma (new distribution), that X (number of successes) will fall below 531.
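A sketch of that calculation:

```python
# mu and sigma if the coin is actually biased with p = 0.55
mu_1, sigma_1 = normal_approximation_to_binomial(1000, 0.55)      # (550, 15.732)

# type-2 error: X still lands below the old upper bound, so we fail to reject
type_2_probability = normal_probability_below(hi, mu_1, sigma_1)  # ~0.113
power = 1 - type_2_probability                                    # ~0.887
```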

So the probability of making a type-2 error or the probability that the new distribution falls below 531 is approximately 11.3%.

The power to detect a type-2 error is 1.00 minus the probability of a type-2 error (1 - 0.113 = 0.887), or 88.7%.

Finally, we may be interested in increasing power to detect a type-2 error. Instead of using a normal_two_sided_bounds function to find the cut-off points (i.e., 469 and 531), we could use a one-sided test that rejects the null hypothesis (‘fair coin’) when X (number of heads on a coin-flip) is much larger than 500.

Here’s the code, using normal_upper_bound :
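A sketch of what that looks like:

```python
# one-sided test: put the whole 5% in the upper tail
hi = normal_upper_bound(0.95, mu_0, sigma_0)                      # ~526 (< 531)

type_2_probability = normal_probability_below(hi, mu_1, sigma_1)  # ~0.064
power = 1 - type_2_probability                                    # ~0.936
```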

This means shifting the upper bound from 531 to 526, providing more probability in the upper tail. The probability of a type-2 error thus goes down from 11.3% to 6.3%.

[Figure: one-sided cutoff at 526 shrinks the type-2 region relative to 531]

And the new (stronger) power to detect type-2 error is 1.0 - 0.064 = 0.936 or 93.6% (up from 88.7% above).

p-Values represent another way of deciding whether to accept or reject the Null Hypothesis. Instead of choosing bounds, thresholds or cut-off points, we could compute the probability, assuming the Null Hypothesis is true, that we would see a value as extreme as the one we just observed.

Here is the code:
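A sketch, following the book's two_sided_p_value:

```python
def two_sided_p_value(x: float, mu: float = 0, sigma: float = 1) -> float:
    """How likely is a value at least as extreme as x (in either direction)
    if our values come from an N(mu, sigma)?"""
    if x >= mu:
        return 2 * normal_probability_above(x, mu, sigma)   # upper tail, doubled
    else:
        return 2 * normal_probability_below(x, mu, sigma)   # lower tail, doubled
```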

Suppose we want to compute, assuming we have a “fair coin” (mu = 500, sigma = 15.8113), the probability of seeing a value like 530. (Note: we use 529.5 instead of 530 below due to the continuity correction.)

Answer: approximately 6.2%

The p-value, 6.2% is higher than our (hypothetical) 5% significance, so we don’t reject the null. On the other hand, if X was slightly more extreme, 532, the probability of seeing that value would be approximately 4.3%, which is less than 5% significance, so we would reject the null.

For one-sided tests, we would use the normal_probability_above and normal_probability_below functions created above:
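These are simple aliases; a quick sketch with the 529.5 example:

```python
upper_p_value = normal_probability_above
lower_p_value = normal_probability_below

two_sided_p_value(529.5, 500, 15.8113)   # ~0.062 -> fail to reject at 5%
upper_p_value(529.5, 500, 15.8113)       # ~0.031 -> reject at 5%
```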

Under the two_sided_p_value test, the extreme value of 529.5 had a 6.2% probability of showing up, which is not low enough to reject the null hypothesis.

However, with a one-sided test, upper_p_value for the same threshold is now 3.1% and we would reject the null hypothesis.

Confidence Intervals

A third approach to deciding whether to accept or reject the null is to use confidence intervals. We’ll use the 530 as we did in the p-Values example.
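A sketch of the calculation, estimating p from the observed 530 heads and using the normal approximation:

```python
p_hat = 530 / 1000
sigma = math.sqrt(p_hat * (1 - p_hat) / 1000)   # ~0.0158

normal_two_sided_bounds(0.95, p_hat, sigma)     # ~(0.4991, 0.5609)
```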

The confidence interval for a coin flipping heads 530 times (out of 1,000) is (0.4991, 0.5609). Since this interval contains p = 0.5 (the probability of heads 50% of the time, assuming a fair coin), we do not reject the null.

If the extreme value were more extreme at 540, we would arrive at a different conclusion:
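The same sketch with 540 heads:

```python
p_hat = 540 / 1000
sigma = math.sqrt(p_hat * (1 - p_hat) / 1000)   # ~0.0158

normal_two_sided_bounds(0.95, p_hat, sigma)     # ~(0.5091, 0.5709)
```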

Here we would be 95% confident that the mean of this distribution is contained between 0.5091 and 0.5709, and this does not contain 0.500 (albeit by a slim margin), so we reject the null hypothesis that this is a fair coin.

Note: confidence intervals are about the interval, not the probability p. We interpret the confidence interval as: if you were to repeat the experiment many times, 95% of the time the “true” parameter, in our example p = 0.5, would lie within the observed confidence interval.

Connecting dots with Python

We used several Python functions to build intuition around statistical hypothesis testing. To highlight the “from scratch” aspect of the book, here is a diagram tying together the various Python functions used in this post:

[Figure: diagram connecting the Python functions used in this post]

This post is part of an ongoing series where I document my progress through Data Science from Scratch by Joel Grus .


For more content on data science, machine learning, R, Python, SQL and more, find me on Twitter .

Paul Apivat

Cryptodata analyst ⛓️.

My interests include data science, machine learning and Python programming.



Hypothesis Testing in Data Science [Types, Process, Example]


In day-to-day life, we come across a lot of data and a wide variety of content. Sometimes there is so much information that we get confused about whether it is correct or not. This is where “hypothesis testing” comes in: it helps in determining the proof and evidence for some belief or piece of information.

What is Hypothesis Testing?

Hypothesis testing is an integral part of statistical inference. It is used to decide whether the given sample data from the population satisfies the given hypothetical condition, using several factors to predict and decide whether the condition holds. In simpler terms, it is trying to prove whether the facts or statements are true or not.

For example, if you predict that students who sit on the last bench perform worse than students sitting on the first bench, this is a hypothetical statement that needs to be clarified using different experiments. Another example is implementing new business strategies and evaluating whether they will work for the business. All of this is very necessary when you work with data as a data scientist.

How is Hypothesis Testing Used in Data Science?

It is important to know how and where we can use hypothesis testing techniques in the field of data science. Data scientists predict a lot of things in their day-to-day work, and to check how certain those findings are, we use hypothesis testing. The main goal of hypothesis testing is to gauge how well the predictions perform based on the sample data drawn from the population. When data scientists build models using various machine learning algorithms, they need to have confidence in their models and forecasts. They provide sample data to the model for training so that the resulting statistics can represent the entire population.

Where and When to Use a Hypothesis Test?

Hypothesis testing is widely used when we need to compare our results against predictions, that is, to compare before-and-after results. For example, suppose someone claims that students writing exams with a blue pen always score above 90%. To prove this statement correct, experiments need to be done: data is collected based on the students' input, tests are run on the results, and after various experiments and observations on students' marks versus pen used, final conclusions are drawn. Hypothesis testing is then done to compare the first and second results, to see the difference and closeness of both outputs. This is how hypothesis testing is done.

How Does Hypothesis Testing Work in Data Science?

Across the data science life cycle, hypothesis testing is done at various stages, starting from the first stage, where EDA, data pre-processing, and manipulation are done. In this stage, we do our initial hypothesis testing to anticipate outcomes in later stages. The next test is done after the model has been built; once the model is ready and hypothesis testing is done, we compare the results of the initial testing with the second round to check their significance and to confirm whether the insights generated in the first cycle match the second one. This helps us know how the model responds to the sample training data. As noted above, hypothesis testing is needed whenever we plan to contrast more than two groups. While checking the results, it is important to check how well they generalise from the sample to the population, and then judge whether any disagreement in the results is meaningful or vague. This is all we can do using hypothesis testing.

Different Types of Hypothesis Testing

Hypothesis testing comes in several types. In total, there are five types of hypothesis testing, described below:


1. Alternative Hypothesis

The alternative hypothesis explains and defines the relationship between two variables. It indicates a positive relationship between two variables, meaning they have a statistical bond, and that the sample observed is going to influence or affect the outcome. An alternative hypothesis is denoted Ha or H1, where the 1 signals the possibility of an influenced outcome. For example: children who study from the beginning of the class have fewer chances to fail. An alternative hypothesis is accepted once the statistical predictions become significant. The alternative hypothesis can be further divided into three parts.

  • Left-tailed: Left tailed hypothesis can be expected when the sample value is less than the true value.   
  • Right-tailed: Right-tailed hypothesis can be expected when the true value is greater than the outcome/predicted value.    
  • Two-tailed: Two-tailed hypothesis is defined when the true value is not equal to the sample value or the output.   

2. Null Hypothesis

The null hypothesis simply states that there is no relation between the statistical variables. If the facts presented at the start do not match the outcomes, we call it null hypothesis testing. The null hypothesis is represented as H0. For example: children who study from the beginning of the class have no fewer chances to fail. The types of null hypothesis are described below:

Simple Hypothesis:  It helps in denoting and indicating the distribution of the population.   

Composite Hypothesis:  It does not denote the population distribution   

Exact Hypothesis:  In the exact hypothesis, the value of the hypothesis is the same as that of the sample distribution. Example: μ = 10.

Inexact Hypothesis:  Here, the hypothesis values are not equal to the sample. It will denote a particular range of values.   

3. Non-directional Hypothesis 

The non-directional hypothesis is a two-tailed hypothesis indicating that the true value does not equal the predicted value. In simpler terms, there is no direction between the two variables. As an example of a non-directional hypothesis: girls and boys have different methodologies to solve a problem. The example states that the thinking methodologies of girls and boys differ; they don't think alike.

4. Directional Hypothesis

In the directional hypothesis, there is a direct relationship between two variables, where one variable influences the other.

5. Statistical Hypothesis

A statistical hypothesis helps in understanding the nature and character of the population. It is a method to decide whether the values and the data we have satisfy the given hypothesis, and it helps us make probabilistic statements to predict outcomes for the population. Common tests of this kind include the T-test, Z-test, and ANOVA.

Methods of Hypothesis Testing

1. Frequentist Hypothesis Testing

Frequentist hypothesis testing makes predictions and assumptions based only on the current, real-time data. The most common frequentist approach is null hypothesis testing.

2. Bayesian Hypothesis Testing

Bayesian testing is a more modern approach to hypothesis testing. It works with past data to predict future possibilities for the hypothesis, using a prior distribution (prior probability) for the observed data. In the medical industry, for instance, doctors deal with patients' diseases using past historical records; with such records, it is easier to understand and predict a patient's current and upcoming health conditions.

Importance of Hypothesis Testing in Data Science

Most of the time, people assume that data science is all about applying machine learning algorithms and getting results. That is partly true, but to work in the data science field one also needs to be well versed in statistics, since most of the background work in data science is done through statistics. Statistics plays a part whenever we pre-process, manipulate, and analyse data. Specifically, hypothesis testing helps in making confident decisions, predicting correct outcomes, and finding insightful conclusions about the population. Hypothesis testing helps us resolve difficult questions more easily.

Basic Steps in Hypothesis Testing [Workflow]

1. Null and Alternative Hypothesis

After we have done our initial research about the prediction we want to verify, it is important to state whether the hypothesis is a null hypothesis (H0) or an alternative hypothesis (Ha). Once we understand the type of hypothesis, it is easier to do mathematical research on it. A null hypothesis usually indicates no relationship between the variables, whereas an alternative hypothesis describes the relationship between the two variables.

  • H0: Girls, on average, are not stronger than boys.
  • Ha: Girls, on average, are stronger than boys.

2. Data Collection

To prove the validity of our statistical test, it is essential to check the data and sample it correctly to get accurate hypothesis results. If the target data is not prepared and ready, it becomes difficult to make predictions or statistical inferences about the population. Preparing efficient data makes hypothesis findings easier to predict.

3. Selection of an appropriate test statistic

To perform analyses on the data, we need to choose an appropriate statistical test. Many types of statistical tests are available, and the choice depends on properties of the data, such as the variance within a group or how different one data category is from another (the variance between groups).

4. Selection of the appropriate significant level

Once we get the outcome of the statistical test, we have to decide whether to reject or accept the null hypothesis. The significance level is denoted by alpha (α) and describes the probability threshold for rejecting the null hypothesis when it is actually true. For example, a significance level of α = 0.05 means we accept a 5% risk of wrongly rejecting the null hypothesis.

5. Calculation of the test statistics and the p-value

The p-value is the probability of obtaining results at least as extreme as the observed results of a hypothesis test, assuming the null hypothesis is true. It helps in evaluating and verifying hypotheses against the sample data. The lower the p-value relative to the significance level alpha (α), the stronger the result; for example, a p-value of 0.05 or less is typically considered statistically significant. These values are computed from the deviation between the observed value and a reference value: the greater the difference between them, the lower the p-value will be.

6. Findings of the test

After knowing the P-value and statistical significance, we can determine our results and take the appropriate decision of whether to accept or reject the null hypothesis based on the facts and statistics presented to us.

How to Calculate Hypothesis Testing?

Hypothesis testing can be done using various statistical tests. One is Z-test. The formula for Z-test is given below:  

Z = (x̄ − μ0) / (σ / √n)

In the above equation:

  • x̄ is the sample mean
  • μ0 is the population mean under the null hypothesis
  • σ is the standard deviation
  • n is the sample size

Depending on the Z-test result, the examination proceeds further: the result will favour either the null hypothesis or the alternative hypothesis, which are stated as:

  • H0: μ = μ0
  • Ha: μ ≠ μ0

Here, H0 is the null hypothesis and Ha is the alternate hypothesis.

In this way, we calculate the hypothesis testing and can apply it to real-world scenarios.
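A minimal sketch of this calculation in Python; the sample numbers are made up purely for illustration:

```python
import math

def z_test(sample_mean: float, mu_0: float, sigma: float, n: int) -> tuple[float, float]:
    """Two-sided one-sample Z-test: returns the z statistic and its p-value."""
    z = (sample_mean - mu_0) / (sigma / math.sqrt(n))
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))   # standard normal CDF
    p_value = 2 * (1 - phi)                             # two-sided tail area
    return z, p_value

# Hypothetical numbers: sample mean 102 from n = 50, against mu_0 = 100, sigma = 8
z, p = z_test(102, 100, 8, 50)   # z ~ 1.77, p ~ 0.077 -> fail to reject at 0.05
```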

Real-World Examples of Hypothesis Testing

Hypothesis testing has a wide variety of use cases that prove beneficial across industries.

1. Healthcare

In the healthcare industry, the research and experiments done to predict the success of any medicine or drug rely on hypothesis testing.

2. Education sector

Hypothesis testing assists in experimenting with different teaching techniques to deal with the understanding capability of different students.   

3. Mental Health

Hypothesis testing helps in indicating the factors that may cause some serious mental health issues.   

4. Manufacturing

Testing whether the new change in the process of manufacturing helped in the improvement of the process as well as in the quantity or not.  In the same way, there are many other use cases that we get to see in different sectors for hypothesis testing. 

Error Terms in Hypothesis Testing

1. Type-I Error

A Type I error occurs during hypothesis testing when the null hypothesis is rejected even though it is true. This kind of error is also known as a false positive, because the test reports an effect that is not really there. For example, an innocent person goes to jail because he is considered to be guilty.

2. Type-II error

A Type II error occurs during hypothesis testing when the null hypothesis is not rejected even though it is false. This kind of error is also called a false negative, because a real effect is missed and the false statement is not rejected. For example, a guilty person is proven innocent in court; this is a Type II error.

3. Level of Significance

The level of significance is majorly used to measure the confidence with which a null hypothesis can be rejected. It is the value with which one can reject the null hypothesis which is H0. The level of significance gauges whether the hypothesis testing is significant or not.   

4. P-Value

P-value stands for probability value; it tells us the likelihood of finding the observed set of observations when the null hypothesis is true, using statistical tests. Its main purpose is to check the significance of the statistical statement.

5. High P-Values

A higher p-value indicates that the result is not statistically significant; for example, a p-value greater than 0.05 is commonly treated as high. A higher p-value also means that our evidence is not strong enough to draw conclusions about the population.

In hypothesis testing, each step is responsible for the outcomes and results: whether it is the selection of the statistical test or the work on the data, each step contributes to the quality of the conclusions. Whenever you plan to predict outcomes and experiment with a sample, hypothesis testing is a useful concept to apply.

Frequently Asked Questions (FAQs)

How can we test a hypothesis?

We can test a hypothesis by selecting the correct statistical test and drawing conclusions based on its results.

Which statistical tests are used for hypothesis testing?

Many statistical tests are used for hypothesis testing, including the Z-test, T-test, etc.

Why is a hypothesis useful?

A hypothesis helps us run various experiments and work on a specific research topic to predict the results.

What are the steps in hypothesis testing?

State the null and alternative hypothesis, collect data, select a statistical test, select a significance value, calculate the p-value, and check your findings.

What is the difference between parametric and non-parametric tests?

In simple words, parametric tests are based on assumptions about the population distribution, whereas non-parametric tests are based only on data collected from a sample.


Gauri Guglani

Gauri Guglani works as a Data Analyst at Deloitte Consulting. She majored in Information Technology and holds a great interest in the field of data science. She has strong technical and managerial skills and is a good communicator. Since her undergraduate years, Gauri has developed a profound interest in writing content and sharing her knowledge through blog and article writing. She loves writing on topics related to Statistics, Python libraries, Machine Learning, Natural Language Processing, and more.


Introduction to Data Science I & II

Hypothesis Testing

Dan L. Nicolae

Hypothesis testing can be thought of as a way to investigate the consistency of a dataset with a model, where a model is a set of rules that describe how data are generated. The consistency is evaluated using ideas from probability and probability distributions.

[Figure: diagram of a data-generating model and the consistency question]

The consistency question in the above diagram is short for “Is it plausible that data was generated from this model?”

We will use a simple example to illustrate this. Suppose that a friend is telling you that she has an urn with 6 blue and 4 red balls from which 5 balls are extracted without replacement. The description in the previous sentence is that of a model with four rules:

  • there is an urn with 10 balls: 6 blue and 4 red;
  • a total of 5 balls are extracted;
  • the balls are extracted without replacement (once a ball is out of the urn, it cannot be selected again);
  • at each extraction, every ball in the urn has the same chance of being selected (this assumption is implicit in urn problems).

Suppose your friend reports the results of a drawing (these are the data) and here are two hypothetical scenarios (datasets):

Scenario 1: outcome is 5 red balls . Is this outcome consistent with the model above? The answer is clearly no as it is not possible to obtain 5 red balls when the first 3 rules above are true.

Scenario 2: outcome is 2 blue and 3 red balls . The answer here is not as obvious as above, but we can use probability to get an evaluation of how likely this outcome is. We will formalize this process in this chapter.
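To make scenario 2 concrete, the probability of that outcome under the model can be computed by counting (a quick sketch):

```python
from math import comb

# P(2 blue and 3 red) when drawing 5 balls without replacement
# from an urn with 6 blue and 4 red balls
p = comb(6, 2) * comb(4, 3) / comb(10, 5)
print(p)   # 60 / 252, roughly 0.238
```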

We will use these ideas in the next sections to answer questions that are more complicated: Is pollution associated with risk of cancer? Are weights of babies different for older mothers?

We end this introduction with examples of other data-generating models (so you can gain more insight before learning how to evaluate them):

A simple random sample of 10 voters from a population of size 10,000 where 40% of the subjects vote for candidate A, 35% for candidate B and 25% for candidate C.

Data from a binomial setting; this was introduced in the previous chapter where the binomial distribution comes from a sequence of Bernoulli trials that follow 4 rules: (i) a fixed number of trials; (ii) two possible outcomes for each trial; (iii) trials are independent; and (iv) the probability of success is the same for each trial

A set of 100 observations generated independently from a Unif(1,5) distribution.


Hypothesis Testing Steps & Examples


What is Hypothesis Testing?

As per the definition from Oxford Languages, a hypothesis is a supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation. As per the Dictionary page on Hypothesis, hypothesis means a proposition or set of propositions, set forth as an explanation for the occurrence of some specified group of phenomena, either asserted merely as a provisional conjecture to guide investigation (working hypothesis) or accepted as highly probable in the light of established facts.

The hypothesis can be defined as a claim, either about a truth that already exists in the world or about something that needs to be established afresh. In simple words, another word for hypothesis is “claim”. Until the claim is proven to be true, it is called a hypothesis; once the claim is proved, it becomes the new truth or new knowledge about the thing. For example, suppose a claim is made that students studying for more than 6 hours a day get more than 90% of marks in their examination. This is just a claim or a hypothesis, not an established truth. For the claim to become the truth for widespread adoption, it needs to be proved using evidence, e.g., data. In order to reject this claim, or otherwise, one needs to do some empirical analysis by gathering data samples and evaluating the claim. The process of gathering data and evaluating the claims or hypotheses, with the goal of rejecting or otherwise (failing to reject), can be called hypothesis testing. Note the wording “failing to reject”: it means that we don't have enough evidence to reject the claim, so until new evidence comes up, the claim can be considered the truth. There are different techniques to test the hypothesis in order to reach a conclusion on whether the hypothesis can be used to represent the truth of the world.

One must note that hypothesis testing never constitutes proof that the hypothesis is absolutely true based on the observations. It only provides added support for considering the hypothesis as truth until new evidence against it can be gathered. We can never be 100% sure about the truth of a hypothesis based on hypothesis testing alone.

Simply speaking, hypothesis testing is a framework that can be used to assert whether the claim or the hypothesis made about a real-world/real-life event can be seen as the truth or otherwise based on the given data (evidences).

Hypothesis Testing Examples

Before we get ahead and start understanding more details about hypotheses and hypothesis testing steps, let's take a look at some real-world examples of how to think about hypothesis testing when dealing with real-world problems:

  • Customers are churning because they aren't getting responses to their complaints or issues.
  • Customers are churning because there are other competitive services in the market which are providing these services at lower cost.
  • Customers are churning because there are other competitive services which are providing more services at the same cost.
  • It is claimed that a 500 gm sugar packet for a particular brand, say XYZA, contains sugar of less than 500 gm, say around 480gm.  Can this claim be taken as truth? How do we know that this claim is true? This is a hypothesis until proved.
  • A group of doctors claims that quitting smoking increases lifespan. Can this claim be taken as new truth? The hypothesis is that quitting smoking results in an increase in lifespan.
  • It is claimed that brisk walking for half an hour every day reverses diabetes. In order to accept this in your lifestyle, you may need evidence that supports this claim or hypothesis.
  • It is claimed that doing Pranayama yoga for 30 minutes a day can help in easing stress by 50%. This can be termed as hypothesis and would require testing / validation for it to be established as a truth and recommended for widespread adoption.
  • One common real-life example of hypothesis testing is election polling. In order to predict the outcome of an election, pollsters take a sample of the population and ask them who they plan to vote for. They then use hypothesis testing to assess whether their sample is representative of the population as a whole. If the results of the hypothesis test are significant, it means that the sample is representative and that the poll can be used to predict the outcome of the election. However, if the results are not significant, it means that the sample is not representative and that the poll should not be used to make predictions.
  • Machine learning models make predictions based on the input data. Each of the machine learning model representing a function approximation can be taken as a hypothesis. All different models constitute what is called as hypothesis space .
  • As part of a linear regression machine learning model, it is claimed that there is a relationship between the response variable and the predictor variables. Can this hypothesis or claim be taken as truth? Let's say the hypothesis is that the housing price depends upon the average income of people already staying in the locality. How true is this hypothesis or claim? The relationship between the response variable and each of the predictor variables can be evaluated using a T-test and T-statistics.
  • For a linear regression model, one hypothesis is that there is no relationship between the response variable and any of the predictor variables. Thus, if b1, b2, b3 are three parameters, all of them are equal to 0: b1 = b2 = b3 = 0. This is where one performs an F-test and uses the F-statistic to test this hypothesis.

You may note the different hypotheses listed above. The next step would be to validate some of these hypotheses. This is where data scientists come into the picture: one or more data scientists may be asked to work on different hypotheses, which results in them looking for appropriate data related to the hypothesis they are working on. This section will be detailed further in the near future.

State the Hypothesis to begin Hypothesis Testing

The first step of hypothesis testing is defining or stating a hypothesis. Before the hypothesis can be tested, it needs to be formulated in terms of mathematical expressions. There are two important aspects to pay attention to prior to the formulation. The following represent the different types of hypotheses that could be put to hypothesis testing:

  • Claim made against a well-established fact : The case in which a fact is well established, or accepted as truth or “knowledge”, and a new claim is made about this fact. For example, when you buy a packet of 500 gm of sugar, you assume, based on the label, that the packet contains at minimum 500 gm of sugar and not any less. In this case, the fact is given or assumed to be the truth. A new claim can be made that the 500 gm packet actually contains less than 500 gm of sugar. This claim needs to be tested before it is accepted as truth. Such cases can be considered for hypothesis testing when it is claimed that the assumption or the default state of being is not true. The claim to be established as the new truth is stated as the “alternate hypothesis”; the opposite state is stated as the “null hypothesis”. Here, the claim that the 500 gm packet contains less than 500 gm of sugar would be the alternate hypothesis, and the opposite state, that the packet contains 500 gm, is the null hypothesis.
  • Claim to establish a new truth : The case in which a claim is made about some reality that exists in the world (a fact). For example, the statement that the housing price depends upon the average income of people already staying in the locality can be considered a claim rather than assumed to be true. Another example is the claim that running 5 miles a day results in a reduction of 10 kg of weight within a month. There can be many such claims which, when required to be proved true, have to go through hypothesis testing. The claim to be established as the new truth is stated as the “alternate hypothesis”, and the opposite state as the “null hypothesis”. Running 5 miles a day resulting in a reduction of 10 kg within a month would be stated as the alternate hypothesis.

Based on the above considerations, the following hypotheses can be stated for hypothesis testing.

  • The packet of 500 gm of sugar contains less than 500 gm of sugar (a claim made against the established fact). This is new knowledge which requires hypothesis testing to get established and acted upon.
  • The housing price depends upon the average income of the people staying in the locality. This is new knowledge which requires hypothesis testing to get established and acted upon.
  • Running 5 miles a day results in a reduction of 10 kg of weight within a month. This is new knowledge which requires hypothesis testing to get established for widespread adoption.

Formulate Null & Alternate Hypothesis as Next Step

Once the hypothesis is defined or stated, the next step is to formulate the null and alternate hypotheses in order to begin hypothesis testing, as described above.

What is a null hypothesis?

In the case where the given statement is a well-established fact or the default state of being in the real world, one can call it a null hypothesis (in simpler words, nothing new). Well-established facts don't need any hypothesis testing and hence can be called the null hypothesis. When a new claim is made that is not well established in the real world, the null hypothesis can be thought of as the default or opposite state of that claim. For example, take the claim that students studying for more than 6 hours a day get more than 90% of marks in their examination. The null hypothesis in this case is that the claim is not true: there is no relationship or association between students studying more than 6 hours a day and getting more than 90% of the marks, and any such occurrence is only a chance occurrence. Another example of a null hypothesis is the presumption of innocence when somebody is alleged to have committed a crime.

The null hypothesis is denoted by the letter H with subscript 0, e.g., [latex]H_0[/latex].

What is an alternate hypothesis?

When the given statement is a claim (an unexpected event in the real world) and not yet proven, one can formulate it as the alternate hypothesis and accordingly define the null hypothesis as the opposite state of that claim. The alternate hypothesis is a new knowledge or truth that needs to be established. In simple words, the hypothesis or claim that needs to be tested against reality in the real world can be termed the alternate hypothesis. In order to conclude that the claim (alternate hypothesis) can be considered the new knowledge or truth (based on the available evidence), it is necessary to reject the null hypothesis. It should be noted that the null and alternate hypotheses are mutually exclusive and, at the same time, asymmetric. In the example given above, the claim that students studying for more than 6 hours get more than 90% of marks is the alternate hypothesis.

The alternate hypothesis is denoted by the letter H with subscript a, e.g., [latex]H_a[/latex].

Once the hypotheses are formulated as null ([latex]H_0[/latex]) and alternate ([latex]H_a[/latex]), there are two possible outcomes of hypothesis testing:

  • Reject the null hypothesis : There is enough evidence based on which one can reject the null hypothesis. Let's understand this with the example above. The null hypothesis is that there is no relationship between students studying more than 6 hours a day and getting more than 90% marks. Suppose that, in a sample of 30 students studying more than 6 hours a day, the average score was found to be 91%. If the null hypothesis were true, this kind of result would be highly unlikely to arise by chance. One can take further samples of 30 students and perform more tests to validate the hypothesis. If similar results show up in the other tests, it can be said with high confidence that there is enough evidence to reject the null hypothesis of no relationship. In such cases, one can accept the claim as a new truth: that students studying more than 6 hours a day get more than 90% marks. The hypothesis is considered the new truth until new tests provide evidence against it.
  • Fail to reject the null hypothesis : There is not enough evidence based on which one can reject the null hypothesis (the well-established fact or default reality). Thus, one fails to reject the null hypothesis. Suppose that, in a sample of 30 students studying more than 6 hours a day, the students were found to score 75% on average. If the null hypothesis is true, this kind of result is fairly likely or expected. With this sample, one cannot reject the null hypothesis that there is no relationship between studying more than 6 hours a day and getting more than 90% marks.

Examples of formulating the null and alternate hypothesis

The following are some examples of null and alternate hypothesis pairs.

  • Null hypothesis: The weight of the sugar packet is 500 gm (the well-established fact). Alternate hypothesis: The weight of the sugar packet is less than 500 gm.
  • Null hypothesis: Running 5 miles a day does not result in the reduction of 10 kg of weight within a month. Alternate hypothesis: Running 5 miles a day results in the reduction of 10 kg of weight within a month.
  • Null hypothesis: The housing price does not depend upon the average income of people staying in the locality. Alternate hypothesis: The housing price depends upon the average income of people staying in the locality.

Hypothesis Testing Steps

The following diagram represents the workflow of hypothesis testing.

Figure 1. Hypothesis Testing Steps

Based on the above, the following are some of the  steps to be taken when doing hypothesis testing:

  • State the hypothesis : First and foremost, the hypothesis needs to be stated. The hypothesis could either be the statement that is assumed to be true or the claim which is made to be true.
  • Formulate the hypothesis : This step requires one to identify the null and alternate hypotheses, or in simple words, formulate the hypothesis. Take as an example the null hypothesis that the sugar packet weighs 500 gm.
  • Set the criteria for a decision : Identify the test statistic that could be used to assess the null hypothesis. With the above example, the test statistic would be based on the average weight of the sugar packets, and t-statistics would be used to determine the P-value. For different kinds of problems, different statistics, including Z-statistics, t-statistics, F-statistics, etc., can be used.
  • Identify the level of significance (alpha) : Before starting the hypothesis test, one sets the significance level (also called alpha ), the value at or below which a P-value is considered statistically significant. Typical values of alpha are 0.1, 0.05, and 0.01. If the P-value is statistically significant, the null hypothesis is rejected. If the P-value is greater than the alpha value, one fails to reject the null hypothesis.
  • Compute the test statistic : The next step is to calculate the test statistic (z-test, t-test, F-test, etc.) needed to determine the P-value. If the sample size is more than 30, it is commonly recommended to use z-statistics; otherwise, t-statistics can be used. In the current example, where 20 packets of sugar are selected for hypothesis testing and the sample mean is 505 gm, the t-statistic would be calculated as the difference between the sample mean (505 gm) and the population mean (500 gm), divided by the sample standard deviation over the square root of the sample size (20).
  • Calculate the P-value of the test statistic : Once the test statistic has been calculated, find the P-value using either a t-table or a z-table. The P-value is the probability of obtaining a test statistic (t-score or z-score) equal to or more extreme than the result obtained from the sample data, given that the null hypothesis H0 is true.
  • Compare the P-value with the level of significance : The significance level defines the allowable range within which, if the value appears, one fails to reject the null hypothesis; this region is also called the non-rejection region . The alpha value is compared with the P-value. If the P-value is less than the significance level, the test is statistically significant and the null hypothesis is rejected. A minimal sketch of these steps in Python follows below.
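The steps above can be sketched in a few lines of Python. This is a minimal illustration assuming scipy is available; the 20 simulated packet weights are invented for the example, and a two-tailed test is used.

```python
# Minimal sketch of the hypothesis testing steps for the 500 gm sugar-packet example.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
weights = rng.normal(505, 8, size=20)   # hypothetical sample of 20 packet weights (gm)

alpha = 0.05                            # level of significance
# H0: mean weight = 500 gm; Ha: mean weight != 500 gm (two-tailed)
t_statistic, p_value = stats.ttest_1samp(weights, popmean=500)

print(f"t = {t_statistic:.3f}, p = {p_value:.4f}")
if p_value <= alpha:
    print("Reject the null hypothesis.")
else:
    print("Fail to reject the null hypothesis.")
```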

P-Value: Key to Statistical Hypothesis Testing

Once you formulate the hypotheses, you need to test them. For instance, if the null hypothesis is stated as “the housing price does not depend upon the average income of people staying in the locality”, it would be tested by taking samples of housing prices and, based on the test results, this null hypothesis would either be rejected or fail to be rejected . In hypothesis testing, the following two are the outcomes:

  • Reject the Null hypothesis
  • Fail to Reject the Null hypothesis

Take the above example of the sugar packet weighing 500 gm. The null hypothesis is set as the statement that the sugar packet weighs 500 gm. After taking a sample of 20 sugar packets and weighing them, it was found that the average weight of the packets was 495 gm. The test statistic (t-statistic) was calculated for this sample and the P-value determined. Let's say the P-value was found to be 15%. Assuming that the level of significance is set at 5%, the test statistic is not statistically significant (P-value > 5%), and thus the null hypothesis fails to get rejected. One could therefore conclude that the sugar packet does weigh 500 gm. However, if the average weight of the sugar packets had been found to be 465 gm, this is far from the mean value of 500 gm, and one could have ended up rejecting the null hypothesis based on the P-value .
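For intuition, a P-value like the 15% in this example can be reproduced from a t-statistic and the t-distribution. The t-statistic of -1.50 below is an assumed value, chosen so that the two-tailed P-value comes out near the 15% used in the text; it is not derived from real packet data.

```python
# Sketch: converting a t-statistic into a two-tailed P-value.
from scipy import stats

t_stat = -1.50                              # assumed t-statistic for the 495 gm sample mean
df = 20 - 1                                 # degrees of freedom = sample size - 1
p_value = 2 * stats.t.sf(abs(t_stat), df)   # two-tailed P-value
print(p_value)                              # approximately 0.15, i.e. the 15% used above
```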

Hypothesis Testing for Problem Analysis & Solution Implementation

Hypothesis testing can be applied in both problem analysis and solution implementation. The following describes how you can apply hypothesis testing techniques in both the problem and solution space:

  • Problem Analysis : Hypothesis testing is a systematic way to validate assumptions or educated guesses during problem analysis. It allows for a structured investigation into the nature of a problem and its potential root causes. In this process, a null hypothesis and an alternative hypothesis are usually defined. The null hypothesis generally asserts that no significant change or effect exists, while the alternative hypothesis posits the opposite. Through controlled experiments, data collection, or statistical analysis, these hypotheses are then tested to determine their validity. For example, if a software company notices a sudden increase in user churn rate, they might hypothesize that the recent update to their application is the root cause. The null hypothesis could be that the update has no effect on churn rate, while the alternative hypothesis would assert that the update significantly impacts the churn rate. By analyzing user behavior and feedback before and after the update, and perhaps running A/B tests where one user group has the update and another doesn’t, the company can test these hypotheses. If the alternative hypothesis is confirmed, the company can then focus on identifying specific issues in the update that may be causing the increased churn, thereby moving closer to a solution.
  • Solution Implementation : Hypothesis testing can also be a valuable tool during the solution implementation phase, serving as a method to evaluate the effectiveness of proposed remedies. By setting up a specific hypothesis about the expected outcome of a solution, organizations can create targeted metrics and KPIs to measure success. For example, if a retail business is facing low customer retention rates, they might implement a loyalty program as a solution. The hypothesis could be that introducing a loyalty program will increase customer retention by at least 15% within six months. The null hypothesis would state that the loyalty program has no significant effect on retention rates. To test this, the company can compare retention metrics from before and after the program’s implementation, possibly even setting up control groups for more robust analysis. By applying statistical tests to this data, the company can determine whether their hypothesis is confirmed or refuted, thereby gauging the effectiveness of their solution and making data-driven decisions for future actions.
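To make the retention example concrete, here is a minimal sketch of a two-proportion z-test using statsmodels; the retained-customer counts and group sizes are invented for illustration.

```python
# Sketch: two-proportion z-test for the loyalty-program retention example.
from statsmodels.stats.proportion import proportions_ztest

retained = [620, 540]        # retained customers: with program, without program (invented)
group_sizes = [1000, 1000]   # customers observed in each group (invented)

# H0: retention rates are equal; Ha: the loyalty-program group retains more
z_stat, p_value = proportions_ztest(retained, group_sizes, alternative='larger')
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```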


In this post, you learned about hypothesis testing and related nuances, such as null and alternate hypothesis formulation techniques and ways to go about doing hypothesis testing. In data science, one of the reasons to understand the concepts of hypothesis testing is the need to verify the relationship between the dependent (response) and independent (predictor) variables. One thus needs to understand the related concepts, such as formulating the null and alternate hypotheses, the level of significance, test statistic calculation, and the P-value. Given that the relationship between dependent and independent variables is itself a sort of hypothesis or claim, the null hypothesis can be set as the scenario where there is no relationship between the dependent and independent variables.


Hypothesis Testing | A Step-by-Step Guide with Easy Examples

Published on November 8, 2019 by Rebecca Bevans. Revised on June 22, 2023.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics . It is most often used by scientists to test specific predictions, called hypotheses, that arise from theories.

There are 5 main steps in hypothesis testing:

  • State your research hypothesis as a null hypothesis (H0) and alternate hypothesis (Ha or H1).
  • Collect data in a way designed to test the hypothesis.
  • Perform an appropriate statistical test .
  • Decide whether to reject or fail to reject your null hypothesis.
  • Present the findings in your results and discussion section.

Though the specific details might vary, the procedure you will use when testing a hypothesis will always follow some version of these steps.

Table of contents

  • Step 1: State your null and alternate hypothesis
  • Step 2: Collect data
  • Step 3: Perform a statistical test
  • Step 4: Decide whether to reject or fail to reject your null hypothesis
  • Step 5: Present your findings
  • Frequently asked questions about hypothesis testing

After developing your initial research hypothesis (the prediction that you want to investigate), it is important to restate it as a null (H o ) and alternate (H a ) hypothesis so that you can test it mathematically.

The alternate hypothesis is usually your initial hypothesis that predicts a relationship between variables. The null hypothesis is a prediction of no relationship between the variables you are interested in.

  • H0: Men are, on average, not taller than women.
  • Ha: Men are, on average, taller than women.


For a statistical test to be valid , it is important to perform sampling and collect data in a way that is designed to test your hypothesis. If your data are not representative, then you cannot make statistical inferences about the population you are interested in.

There are a variety of statistical tests available, but they are all based on the comparison of within-group variance (how spread out the data is within a category) versus between-group variance (how different the categories are from one another).

If the between-group variance is large enough that there is little or no overlap between groups, then your statistical test will reflect that by showing a low p -value . This means it is unlikely that the differences between these groups came about by chance.

Alternatively, if there is high within-group variance and low between-group variance, then your statistical test will reflect that with a high p -value. This means it is likely that any difference you measure between groups is due to chance.

Your choice of statistical test will be based on the type of variables and the level of measurement of your collected data .

In the height example, a statistical test comparing the two groups will give you:

  • an estimate of the difference in average height between the two groups.
  • a p-value showing how likely you are to see this difference if the null hypothesis of no difference is true.
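Here is a minimal sketch of such a test for the height example, assuming simulated data and scipy; the group sizes, means, and spread are illustrative assumptions.

```python
# Sketch: two-sample t-test comparing average heights of two groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
men = rng.normal(175, 7, size=50)      # hypothetical heights in cm
women = rng.normal(169, 7, size=50)

t_stat, p_value = stats.ttest_ind(men, women)
print(f"estimated difference: {men.mean() - women.mean():.2f} cm")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```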

Based on the outcome of your statistical test, you will have to decide whether to reject or fail to reject your null hypothesis.

In most cases you will use the p -value generated by your statistical test to guide your decision. And in most cases, your predetermined level of significance for rejecting the null hypothesis will be 0.05 – that is, when there is a less than 5% chance that you would see these results if the null hypothesis were true.

In some cases, researchers choose a more conservative level of significance, such as 0.01 (1%). This minimizes the risk of incorrectly rejecting the null hypothesis ( Type I error ).

The results of hypothesis testing will be presented in the results and discussion sections of your research paper , dissertation or thesis .

In the results section you should give a brief summary of the data and a summary of the results of your statistical test (for example, the estimated difference between group means and associated p -value). In the discussion , you can discuss whether your initial hypothesis was supported by your results or not.

In the formal language of hypothesis testing, we talk about rejecting or failing to reject the null hypothesis. You will probably be asked to do this in your statistics assignments.

However, when presenting research results in academic papers we rarely talk this way. Instead, we go back to our alternate hypothesis (in this case, the hypothesis that men are on average taller than women) and state whether the result of our test did or did not support the alternate hypothesis.

If your null hypothesis was rejected, this result is interpreted as “supported the alternate hypothesis.”

These are superficial differences; you can see that they mean the same thing.

You might notice that we don’t say that we reject or fail to reject the alternate hypothesis . This is because hypothesis testing is not designed to prove or disprove anything. It is only designed to test whether a pattern we measure could have arisen spuriously, or by chance.

If we reject the null hypothesis based on our research (i.e., we find that it is unlikely that the pattern arose by chance), then we can say our test lends support to our hypothesis . But if the pattern does not pass our decision rule, meaning that it could have arisen by chance, then we say the test is inconsistent with our hypothesis .

Frequently asked questions about hypothesis testing

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Null and alternative hypotheses are used in statistical hypothesis testing . The null hypothesis of a test always predicts no effect or no relationship between variables, while the alternative hypothesis states your research prediction of an effect or relationship.



Understanding Hypothesis Testing

Hypothesis testing involves formulating assumptions about population parameters based on sample statistics and rigorously evaluating these assumptions against empirical evidence. This article sheds light on the significance of hypothesis testing and the critical steps involved in the process.

What is Hypothesis Testing?

A hypothesis is an assumption or idea, specifically a statistical claim about an unknown population parameter. For example, a judge assumes a person is innocent and verifies this by reviewing evidence and hearing testimony before reaching a verdict.

Hypothesis testing is a statistical method used to make a statistical decision from experimental data. It is essentially an assumption that we make about a population parameter, and it evaluates two mutually exclusive statements about a population to determine which statement is best supported by the sample data.

To test the validity of the claim or assumption about the population parameter:

  • A sample is drawn from the population and analyzed.
  • The results of the analysis are used to decide whether the claim is true or not.
Example: You say the average height in the class is 30, or that a boy is taller than a girl. These are assumptions we are making, and we need some statistical way to prove them; we need a mathematical conclusion that whatever we are assuming is true.

Defining Hypotheses

  • Null hypothesis (H 0 ): In statistics, the null hypothesis is a general statement or default position that there is no relationship between two measured cases or no relationship among groups. In other words, it is a basic assumption made based on knowledge of the problem. Example: A company's mean production is 50 units per day, i.e., H 0 : [Tex]\mu[/Tex] = 50.
  • Alternative hypothesis (H 1 ): The alternative hypothesis is the hypothesis used in hypothesis testing that is contrary to the null hypothesis. Example: The company's production is not equal to 50 units per day, i.e., H 1 : [Tex]\mu \ne 50[/Tex].

Key Terms of Hypothesis Testing

  • Level of significance : It refers to the degree of significance at which we accept or reject the null hypothesis. Since 100% accuracy is not possible when accepting a hypothesis, we select a level of significance, usually 5%. It is normally denoted by [Tex]\alpha[/Tex] and is generally 0.05 or 5%, which means your output should be 95% likely to give a similar result in each sample.
  • P-value: The P-value, or calculated probability, is the probability of finding the observed (or more extreme) results when the null hypothesis (H0) of a given problem is true. If your P-value is less than the chosen significance level, you reject the null hypothesis, i.e., you accept that your sample supports the alternative hypothesis.
  • Test Statistic: The test statistic is a numerical value calculated from sample data during a hypothesis test, used to determine whether to reject the null hypothesis. It is compared to a critical value or p-value to make decisions about the statistical significance of the observed results.
  • Critical value : The critical value in statistics is a threshold or cutoff point used to determine whether to reject the null hypothesis in a hypothesis test.
  • Degrees of freedom: Degrees of freedom are associated with the variability or freedom one has in estimating a parameter. The degrees of freedom are related to the sample size and determine the shape.
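As a small illustration of how significance levels, critical values, and degrees of freedom fit together, the sketch below looks up two-tailed critical values with scipy; the alpha value and degrees of freedom are illustrative.

```python
# Sketch: looking up two-tailed critical values for alpha = 0.05.
from scipy import stats

alpha = 0.05
z_crit = stats.norm.ppf(1 - alpha / 2)      # z critical value, roughly 1.96
t_crit = stats.t.ppf(1 - alpha / 2, df=9)   # t critical value with 9 degrees of freedom, roughly 2.26
print(z_crit, t_crit)
```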

Why do we use Hypothesis Testing?

Hypothesis testing is an important procedure in statistics. It evaluates two mutually exclusive statements about a population to determine which statement is most supported by the sample data. When we say that findings are statistically significant, it is thanks to hypothesis testing.

One-Tailed and Two-Tailed Test

A one-tailed test focuses on one direction, either greater than or less than a specified value. We use a one-tailed test when there is a clear directional expectation based on prior knowledge or theory. The critical region is located on only one side of the distribution curve. If the sample falls into this critical region, the null hypothesis is rejected in favor of the alternative hypothesis.

One-Tailed Test

There are two types of one-tailed test:

  • Left-Tailed (Left-Sided) Test: The alternative hypothesis asserts that the true parameter value is less than the null hypothesis. Example: H 0 ​: [Tex]\mu \geq 50 [/Tex] and H 1 : [Tex]\mu < 50 [/Tex]
  • Right-Tailed (Right-Sided) Test : The alternative hypothesis asserts that the true parameter value is greater than the null hypothesis. Example: H 0 : [Tex]\mu \leq50 [/Tex] and H 1 : [Tex]\mu > 50 [/Tex]

Two-Tailed Test

A two-tailed test considers deviations in both directions, greater than and less than a specified value. We use a two-tailed test when there is no specific directional expectation and we want to detect any significant difference.

Example: H 0 : [Tex]\mu = 50[/Tex] and H 1 : [Tex]\mu \neq 50[/Tex]
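A minimal sketch of how one-tailed and two-tailed P-values relate for the same test statistic follows; the z value of 2.0 is illustrative.

```python
# Sketch: one-tailed vs two-tailed P-values for the same z-score.
from scipy import stats

z = 2.0                                  # hypothetical z test statistic
p_right = stats.norm.sf(z)               # right-tailed: P(Z >= z), roughly 0.023
p_left = stats.norm.cdf(z)               # left-tailed: P(Z <= z), roughly 0.977
p_two = 2 * stats.norm.sf(abs(z))        # two-tailed, roughly 0.046
print(p_right, p_left, p_two)
```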


What are Type 1 and Type 2 errors in Hypothesis Testing?

In hypothesis testing, Type I and Type II errors are two possible errors that researchers can make when drawing conclusions about a population based on a sample of data. These errors are associated with the decisions made regarding the null hypothesis and the alternative hypothesis.

  • Type I error: When we reject the null hypothesis, although that hypothesis was true. Type I error is denoted by alpha( [Tex]\alpha [/Tex] ).
  • Type II errors : When we accept the null hypothesis, but it is false. Type II errors are denoted by beta( [Tex]\beta [/Tex] ).


Decision \ Reality            Null Hypothesis is True          Null Hypothesis is False
Accept Null Hypothesis        Correct Decision                 Type II Error (False Negative)
Reject Null Hypothesis        Type I Error (False Positive)    Correct Decision

How does Hypothesis Testing work?

Step 1 – Define Null and Alternative Hypotheses

State the null hypothesis ( [Tex]H_0 [/Tex] ), representing no effect, and the alternative hypothesis ( [Tex]H_1 [/Tex] ​), suggesting an effect or difference.

We first identify the problem about which we want to make an assumption keeping in mind that our assumption should be contradictory to one another, assuming Normally distributed data.

Step 2 – Choose significance level

Select a significance level ( [Tex]\alpha [/Tex] ), typically 0.05, to determine the threshold for rejecting the null hypothesis. It lends validity to our hypothesis test, ensuring that we have sufficient evidence to back up our claims. The significance level is usually determined before the test, and the p-value computed from the data is later compared against it.

Step 3 – Collect and Analyze data.

Gather relevant data through observation or experimentation. Analyze the data using appropriate statistical methods to obtain a test statistic.

Step 4 – Calculate Test Statistic

In this step, the data are evaluated and we compute a score based on the characteristics of the data. The choice of test statistic depends on the type of hypothesis test being conducted.

There are various hypothesis tests, each appropriate for a different goal. The test statistic could come from a Z-test , Chi-square test, T-test , and so on.

  • Z-test : If population means and standard deviations are known, the Z-statistic is commonly used.
  • t-test : If population standard deviations are unknown and the sample size is small, the t-test statistic is more appropriate.
  • Chi-square test : The Chi-square test is used for categorical data or for testing independence in contingency tables.
  • F-test : The F-test is often used in analysis of variance (ANOVA) to compare variances or test the equality of means across multiple groups.

In the worked examples later in this article we have a smaller dataset, so the t-test is more appropriate for testing our hypothesis.

T-statistic is a measure of the difference between the means of two groups relative to the variability within each group. It is calculated as the difference between the sample means divided by the standard error of the difference. It is also known as the t-value or t-score.

Step 5 – Compare the Test Statistic

In this stage, we decide whether we should accept or reject the null hypothesis. There are two ways to make this decision.

Method A: Using Critical Values

Comparing the test statistic with the tabulated critical value, we have:

  • If |Test Statistic| > Critical Value: Reject the null hypothesis.
  • If |Test Statistic| ≤ Critical Value: Fail to reject the null hypothesis.

Note: Critical values are predetermined threshold values that are used to make a decision in hypothesis testing. To determine critical values for hypothesis testing, we typically refer to a statistical distribution table , such as the normal distribution or t-distribution tables based on.

Method B: Using P-values

We can also come to a conclusion using the p-value:

  • If the p-value is less than or equal to the significance level i.e. ( [Tex]p\leq\alpha [/Tex] ), you reject the null hypothesis. This indicates that the observed results are unlikely to have occurred by chance alone, providing evidence in favor of the alternative hypothesis.
  • If the p-value is greater than the significance level, i.e. ( [Tex]p > \alpha[/Tex] ), you fail to reject the null hypothesis. This suggests that the observed results are consistent with what would be expected under the null hypothesis.

Note : The p-value is the probability of obtaining a test statistic as extreme as, or more extreme than, the one observed in the sample, assuming the null hypothesis is true. To determine p-value for hypothesis testing, we typically refer to a statistical distribution table , such as the normal distribution or t-distribution tables based on.

Step 6 – Interpret the Results

At last, we can conclude our experiment using method A or B.

Calculating test statistic

To validate our hypothesis about a population parameter we use statistical functions . We use the z-score, p-value, and level of significance(alpha) to make evidence for our hypothesis for normally distributed data .

1. Z-statistics:

When population means and standard deviations are known.

[Tex]z = \frac{\bar{x} - \mu}{\sigma/\sqrt{n}}[/Tex]

  • [Tex]\bar{x} [/Tex] is the sample mean,
  • μ represents the population mean, 
  • σ is the standard deviation
  • and n is the size of the sample.

2. T-Statistics

The t-test is used when n < 30;

t-statistic calculation is given by:

[Tex]t = \frac{\bar{x} - \mu}{s/\sqrt{n}}[/Tex]

  • t = t-score,
  • x̄ = sample mean
  • μ = population mean,
  • s = standard deviation of the sample,
  • n = sample size

3. Chi-Square Test

The Chi-square test for independence of categorical data (non-normally distributed) uses:

[Tex]\chi^2 = \sum \frac{(O_{ij} - E_{ij})^2}{E_{ij}}[/Tex]

  • [Tex]O_{ij}[/Tex] is the observed frequency in cell [Tex]{ij} [/Tex]
  • i,j are the rows and columns index respectively.
  • [Tex]E_{ij}[/Tex] is the expected frequency in cell [Tex]{ij}[/Tex] , calculated as : [Tex]\frac{{\text{{Row total}} \times \text{{Column total}}}}{{\text{{Total observations}}}}[/Tex]
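A minimal sketch of the chi-square test of independence follows, assuming scipy; the 2x2 contingency counts are invented for illustration.

```python
# Sketch: chi-square test of independence on a hypothetical 2x2 table.
import numpy as np
from scipy import stats

observed = np.array([[30, 70],    # invented counts: group 1 outcomes
                     [45, 55]])   # invented counts: group 2 outcomes

chi2, p_value, dof, expected = stats.chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")
```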

Real life Examples of Hypothesis Testing

Let’s examine hypothesis testing using two real life situations,

Case A: Does a New Drug Affect Blood Pressure?

Imagine a pharmaceutical company has developed a new drug that they believe can effectively lower blood pressure in patients with hypertension. Before bringing the drug to market, they need to conduct a study to assess its impact on blood pressure.

  • Before Treatment: 120, 122, 118, 130, 125, 128, 115, 121, 123, 119
  • After Treatment: 115, 120, 112, 128, 122, 125, 110, 117, 119, 114

Step 1 : Define the Hypothesis

  • Null Hypothesis : (H 0 )The new drug has no effect on blood pressure.
  • Alternate Hypothesis : (H 1 )The new drug has an effect on blood pressure.

Step 2: Define the Significance level

Let's set the significance level at 0.05, meaning we will reject the null hypothesis if the evidence suggests less than a 5% chance of observing the results due to random variation alone.

Step 3 : Compute the test statistic

Using a paired t-test, analyze the data to obtain a test statistic and a p-value. The test statistic (here, the t-statistic) is calculated based on the differences between blood pressure measurements before and after treatment.

t = m / (s / √n)

where:

  • m = mean of the differences, i.e., the mean of d_i = X_after,i − X_before,i
  • s = standard deviation of the differences d_i
  • n = sample size

Here, m = -3.9, s ≈ 1.37 (the sample standard deviation of the differences), and n = 10, so t = -3.9 / (1.37 / √10) ≈ -9 from the paired t-test formula.

Step 4: Find the p-value

With the calculated t-statistic of -9 and degrees of freedom df = 9, you can find the p-value using statistical software or a t-distribution table.

Thus, the p-value = 8.538051223166285e-06.

Step 5: Result

  • If the p-value is less than or equal to 0.05, the researchers reject the null hypothesis.
  • If the p-value is greater than 0.05, they fail to reject the null hypothesis.

Conclusion: Since the p-value (8.538051223166285e-06) is less than the significance level (0.05), the researchers reject the null hypothesis. There is statistically significant evidence that the average blood pressure before and after treatment with the new drug is different.

Python Implementation of Case A

Let's implement this hypothesis test in Python, testing whether the new drug affects blood pressure with a paired t-test. We'll use the scipy.stats library, part of SciPy, a scientific computing package in Python commonly used for mathematical and statistical computations.

```python
import numpy as np
from scipy import stats

# Data
before_treatment = np.array([120, 122, 118, 130, 125, 128, 115, 121, 123, 119])
after_treatment = np.array([115, 120, 112, 128, 122, 125, 110, 117, 119, 114])

# Step 1: Null and Alternate Hypotheses
null_hypothesis = "The new drug has no effect on blood pressure."
alternate_hypothesis = "The new drug has an effect on blood pressure."

# Step 2: Significance Level
alpha = 0.05

# Step 3: Paired T-test
t_statistic, p_value = stats.ttest_rel(after_treatment, before_treatment)

# Step 4: Calculate T-statistic manually
m = np.mean(after_treatment - before_treatment)
s = np.std(after_treatment - before_treatment, ddof=1)  # ddof=1 for sample standard deviation
n = len(before_treatment)
t_statistic_manual = m / (s / np.sqrt(n))

# Step 5: Decision
if p_value <= alpha:
    decision = "Reject"
else:
    decision = "Fail to reject"

# Conclusion
if decision == "Reject":
    conclusion = ("There is statistically significant evidence that the average blood pressure "
                  "before and after treatment with the new drug is different.")
else:
    conclusion = ("There is insufficient evidence to claim a significant difference in average "
                  "blood pressure before and after treatment with the new drug.")

# Display results
print("T-statistic (from scipy):", t_statistic)
print("P-value (from scipy):", p_value)
print("T-statistic (calculated manually):", t_statistic_manual)
print(f"Decision: {decision} the null hypothesis at alpha={alpha}.")
print("Conclusion:", conclusion)
```

T-statistic (from scipy): -9.0
P-value (from scipy): 8.538051223166285e-06
T-statistic (calculated manually): -9.0
Decision: Reject the null hypothesis at alpha=0.05.
Conclusion: There is statistically significant evidence that the average blood pressure before and after treatment with the new drug is different.

In the above example, given the T-statistic of approximately -9 and an extremely small p-value, the results indicate a strong case to reject the null hypothesis at a significance level of 0.05. 

  • The results suggest that the new drug, treatment, or intervention has a significant effect on lowering blood pressure.
  • The negative T-statistic indicates that the mean blood pressure after treatment is significantly lower than the assumed population mean before treatment.

Case B : Cholesterol level in a population

Data: A sample of 25 individuals is taken, and their cholesterol levels are measured.

Cholesterol Levels (mg/dL): 205, 198, 210, 190, 215, 205, 200, 192, 198, 205, 198, 202, 208, 200, 205, 198, 205, 210, 192, 205, 198, 205, 210, 192, 205.

Populations Mean = 200

Population Standard Deviation (σ): 5 mg/dL(given for this problem)

Step 1: Define the Hypothesis

  • Null Hypothesis (H 0 ): The average cholesterol level in a population is 200 mg/dL.
  • Alternate Hypothesis (H 1 ): The average cholesterol level in a population is different from 200 mg/dL.

Step 2: Define the Significance Level

As the direction of deviation is not given, we assume a two-tailed test. Based on the normal distribution, the critical values for a significance level of 0.05 (two-tailed), obtained from the z-table, are approximately -1.96 and 1.96.

Step 3: Compute the Test Statistic

The sample mean works out to 202.04 mg/dL, so the test statistic is calculated using the z formula [Tex]Z = (202.04 - 200) / (5 / \sqrt{25}) \approx 2.04[/Tex].

Step 4: Result

Since the absolute value of the test statistic (2.04) is greater than the critical value (1.96), we reject the null hypothesis and conclude that there is statistically significant evidence that the average cholesterol level in the population is different from 200 mg/dL.

Python Implementation of Case B

```python
import math

import numpy as np
import scipy.stats as stats

# Given data
sample_data = np.array(
    [205, 198, 210, 190, 215, 205, 200, 192, 198, 205, 198, 202, 208,
     200, 205, 198, 205, 210, 192, 205, 198, 205, 210, 192, 205])
population_std_dev = 5
population_mean = 200
sample_size = len(sample_data)

# Step 1: Define the Hypotheses
# Null Hypothesis (H0): The average cholesterol level in a population is 200 mg/dL.
# Alternate Hypothesis (H1): The average cholesterol level is different from 200 mg/dL.

# Step 2: Define the Significance Level
alpha = 0.05  # Two-tailed test

# Critical values for a significance level of 0.05 (two-tailed)
critical_value_left = stats.norm.ppf(alpha / 2)
critical_value_right = -critical_value_left

# Step 3: Compute the test statistic
sample_mean = sample_data.mean()
z_score = (sample_mean - population_mean) / (population_std_dev / math.sqrt(sample_size))

# Step 4: Result
# Check if the absolute value of the test statistic exceeds the critical values
if abs(z_score) > max(abs(critical_value_left), abs(critical_value_right)):
    print("Reject the null hypothesis.")
    print("There is statistically significant evidence that the average cholesterol level "
          "in the population is different from 200 mg/dL.")
else:
    print("Fail to reject the null hypothesis.")
    print("There is not enough evidence to conclude that the average cholesterol level "
          "in the population is different from 200 mg/dL.")
```

Reject the null hypothesis.
There is statistically significant evidence that the average cholesterol level in the population is different from 200 mg/dL.

Limitations of Hypothesis Testing

  • Although a useful technique, hypothesis testing does not offer a comprehensive grasp of the topic being studied. It concentrates on specific hypotheses and statistical significance without fully reflecting the intricacy or whole context of the phenomenon.
  • The accuracy of hypothesis testing results is contingent on the quality of the available data and the appropriateness of the statistical methods used. Inaccurate data or poorly formulated hypotheses can lead to incorrect conclusions.
  • Relying solely on hypothesis testing may cause analysts to overlook significant patterns or relationships in the data that are not captured by the specific hypotheses being tested. This limitation underscores the importance of complementing hypothesis testing with other analytical approaches.

Hypothesis testing stands as a cornerstone in statistical analysis, enabling data scientists to navigate uncertainties and draw credible inferences from sample data. By systematically defining null and alternative hypotheses, choosing significance levels, and leveraging statistical tests, researchers can assess the validity of their assumptions. The article also elucidates the critical distinction between Type I and Type II errors, providing a comprehensive understanding of the nuanced decision-making process inherent in hypothesis testing. The real-life example of testing a new drug’s effect on blood pressure using a paired T-test showcases the practical application of these principles, underscoring the importance of statistical rigor in data-driven decision-making.

Frequently Asked Questions (FAQs)

1. What are the 3 types of hypothesis tests?

There are three types of hypothesis tests: right-tailed, left-tailed, and two-tailed. Right-tailed tests assess if a parameter is greater, left-tailed if lesser. Two-tailed tests check for non-directional differences, greater or lesser.

2. What are the 4 components of hypothesis testing?

  • Null Hypothesis ( [Tex]H_o [/Tex] ): No effect or difference exists.
  • Alternative Hypothesis ( [Tex]H_1 [/Tex] ): An effect or difference exists.
  • Significance Level ( [Tex]\alpha [/Tex] ): The risk of rejecting the null hypothesis when it is true (Type I error).
  • Test Statistic: A numerical value representing the observed evidence against the null hypothesis.

3. What is hypothesis testing in ML?

A statistical method to evaluate the performance and validity of machine learning models. It tests specific hypotheses about model behavior, such as whether features influence predictions or whether a model generalizes well to unseen data.

4. What is the difference between Pytest and Hypothesis in Python?

Pytest is a general-purpose testing framework for Python code, while Hypothesis is a property-based testing framework for Python that focuses on generating test cases based on specified properties of the code.


Hypothesis tests #

Formal hypothesis testing is perhaps the most prominent and widely-employed form of statistical analysis. It is sometimes seen as the most rigorous and definitive part of a statistical analysis, but it is also the source of many statistical controversies. The currently-prevalent approach to hypothesis testing dates to developments that took place between 1925 and 1940, especially the work of Ronald Fisher , Jerzy Neyman , and Egon Pearson .

In recent years, many prominent statisticians have argued that less emphasis should be placed on the formal hypothesis testing approaches developed in the early twentieth century, with a correspondingly greater emphasis on other forms of uncertainty analysis. Our goal here is to give an overview of some of the well-established and widely-used approaches for hypothesis testing. We will also provide some perspectives on how these tools can be effectively used, and discuss their limitations. We will also discuss some new approaches to hypothesis testing that may eventually come to be as prominent as these classical approaches.

A falsifiable hypothesis is a statement, or hypothesis, that can be contradicted with evidence. In empirical (data-driven) research, this evidence will always be obtained through the data. In statistical hypothesis testing, the hypothesis that we formally test is called the null hypothesis . The alternative hypothesis is a second hypothesis that is our proposed explanation for what happens if the null hypothesis is wrong.

Test statistics #

The key element of a statistical hypothesis test is the test statistic , which (like any statistic) is a function of the data. A test statistic takes our entire dataset, and reduces it to one number. This one number ideally should contain all the information in the data that is relevant for assessing the two hypotheses of interest, and exclude any aspects of the data that are irrelevant for assessing the two hypotheses. The test statistic measures evidence against the null hypothesis. Most test statistics are constructed so that a value of zero represents the lowest possible level of evidence against the null hypothesis. Test statistic values that deviate from zero represent greater levels of evidence against the null hypothesis. The larger the magnitude of the test statistic, the stronger the evidence against the null hypothesis.

A major theme of statistical research is to devise effective ways to construct test statistics. Many useful ways to do this have been devised, and there is no single approach that is always the best. In this introductory course, we will focus on tests that start with an estimate of a quantity that is relevant for assessing the hypotheses, and then proceed by standardizing this estimate, dividing it by its standard error. This approach is sometimes referred to as “Wald testing”, after Abraham Wald.

Testing the equality of two proportions #

As a basic example, let’s consider risk perception related to COVID-19. As you will see below, hypothesis testing can appear at first to be a fairly elaborate exercise. Using this example, we describe each aspect of this exercise in detail below.

The data and research question #

The data shown below are simulated but are designed to reflect actual surveys conducted in the United States in March of 2020. Participants were asked whether they perceive that they have a substantial risk of dying if they are infected with the novel coronavirus. The number of people giving each response, stratified by age, is shown below (only two age groups are shown):

             High risk    Not high risk
Age < 30     25           202
Age 60-69    30           124

Each subject’s response is binary – they either perceive themselves to be high risk, or not to be at high risk. When working with this type of data, we are usually interested in the proportion of people who provide each response within each stratum (age group). These are conditional proportions, conditioning on the age group. The numerical values of the conditional proportions are given below:

             High risk    Not high risk
Age < 30     0.110        0.890
Age 60-69    0.195        0.805

There are four conditional proportions in the table above – the proportion of younger people who perceive themselves to be at higher risk, 0.110=25/(25+202); the proportion of younger people who do not perceive themselves to be at high risk, 0.890=202/(25+202); the proportion of older people who perceive themselves to be at high risk 0.195=30/(30+124); and the proportion of older people who do not perceive themselves to be at high risk, 0.805=124/(30+124).

The trend in the data is that younger people perceive themselves to be at lower risk of dying than older people, by a difference of 0.195-0.110=0.085 (in terms of proportions). But is this trend only present in this sample, or is it generalizable to a broader population (say the entire US population)? That is the goal of conducting a statistical hypothesis test in this setting.

The population structure #

Corresponding to our data above is the unobserved population structure, which we can denote as follows

             High risk    Not high risk
Age < 30     \(p\)        \(1-p\)
Age 60-69    \(q\)        \(1-q\)

The symbols \(p\) and \(q\) in the table above are population parameters . These are quantities that we do not know, and wish to assess using the data. In this case, our null hypothesis can be expressed as the statement \(p = q\) . We can estimate \(p\) using the sample proportion \(\hat{p} = 0.110\) , and similarly estimate \(q\) using \(\hat{q} = 0.195\) . However, these estimates do not immediately provide us with a way of expressing the evidence relating to the hypothesis that \(p=q\) . This is provided by the test statistic.

A test statistic #

As noted above, a test statistic is a reduction of the data to one number that captures all of the relevant information for assessing the hypotheses. A natural first choice for a test statistic here would be the difference in sample proportions between the two age groups, which is 0.195 - 0.110 = 0.085. There is a difference of 0.085 between the perceived risks of death in the younger and older age groups.

The difference in rates (0.085) does not on its own make a good test statistic, although it is a good start toward obtaining one. The reason for this is that the evidence underlying this difference in rates depends also on the absolute rates (0.110 and 0.195), and on the sample sizes (227 and 154). If we only know that the difference in rates is 0.085, this is not sufficient to evaluate the hypothesis in a statistical manner. A given difference in rates is much stronger evidence if it is obtained from a larger sample. If we have a difference of 0.085 with a very large sample, say one million people, then we should be almost certain that the true rates differ (i.e. the data are highly incompatible with the hypothesis that \(p=q\) ). If we have the same difference in rates of 0.085, but with a small sample, say 50 people per age group, then there would be almost no evidence for a true difference in the rates (i.e. the data are compatible with the hypothesis \(p=q\) ).

To address this issue, we need to consider the uncertainty in the estimated rate difference, which is 0.085. Recall that the estimated rate difference is obtained from the sample and therefore is almost certain to deviate somewhat from the true rate difference in the population (which is unknown). Recall from our study of standard errors that the standard error for an estimated proportion is \(\sqrt{p(1-p)/n}\) , where \(p\) is the outcome probability (here the outcome is that a person perceives a high risk of dying), and \(n\) is the sample size.

In the present analysis, we are comparing two proportions, so we have two standard errors. The estimated standard error for the younger people is \(\sqrt{0.11\cdot 0.89/227} \approx 0.021\) . The estimated standard error for the older people is \(\sqrt{0.195\cdot 0.805/154} \approx 0.032\) . Note that both standard errors are estimated, rather than exact, because we are plugging in estimates of the rates (0.11 and 0.195). Also note that the standard error for the rate among older people is greater than that for younger people. This is because the sample size for older people is smaller, and also because the estimated rate for older people is closer to 1/2.

In our previous discussion of standard errors, we saw how standard errors for independent quantities \(A\) and \(B\) can be used to obtain the standard error for the difference \(A-B\) . Applying that result here, we see that the standard error for the estimated difference in rates 0.195-0.11=0.085 is \(\sqrt{0.021^2 + 0.032^2} \approx 0.038\) .

The final step in constructing our test statistic is to construct a Z-score from the estimated difference in rates. As with all Z-scores, we proceed by taking the estimated difference in rates, and then divide it by its standard error. Thus, we get a test statistic value of \(0.085 / 0.038 \approx 2.24\) .
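To make this concrete, here is a minimal sketch in Python that reproduces the calculation from the summary numbers above (NumPy is assumed to be available):

```python
import numpy as np

# Sample proportions and sample sizes from the risk perception data
p_hat, n1 = 0.110, 227  # younger group (age < 30)
q_hat, n2 = 0.195, 154  # older group (age 60-69)

# Standard error of each estimated proportion: sqrt(p * (1 - p) / n)
se1 = np.sqrt(p_hat * (1 - p_hat) / n1)
se2 = np.sqrt(q_hat * (1 - q_hat) / n2)

# Standard error of the difference between two independent estimates
se_diff = np.sqrt(se1**2 + se2**2)

# Z-score: the estimated difference divided by its standard error
z = (q_hat - p_hat) / se_diff
print(round(z, 2))  # about 2.23; the text's 2.24 comes from rounding the SEs first
```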

A test statistic value of 2.24 is not very close to zero, so there is some evidence against the null hypothesis. But the strength of this evidence remains unclear. Thus, we must consider how to calibrate this evidence in a way that makes it more interpretable.

Calibrating the evidence in the test statistic #

By the central limit theorem (CLT), a Z-score approximately follows a normal distribution. When the null hypothesis holds, the Z-score approximately follows the standard normal distribution (recall that a standard normal distribution is a normal distribution with expected value equal to 0 and variance equal to 1). If the null hypothesis does not hold, then the test statistic continues to approximately follow a normal distribution, but it is not the standard normal distribution.

A test statistic of zero represents the least possible evidence against the null hypothesis. Here, we will obtain a test statistic of zero when the two proportions being compared are identical, i.e. exactly the same proportions of younger and older people perceive a substantial risk of dying from a disease. Even if the test statistic is exactly zero, this does not guarantee that the null hypothesis is true. However it is the least amount of evidence that the data can present against the null hypothesis.

In a hypothesis testing setting using normally distributed Z-scores, as is the case here (due to the CLT), the standard normal distribution is the reference distribution for our test statistic. If the Z-score falls in the center of the reference distribution, there is no evidence against the null hypothesis. If the Z-score falls into either tail of the reference distribution, then there is evidence against the null hypothesis, and the further into the tails of the reference distribution the Z-score falls, the greater the evidence.

The most conventional way to quantify the evidence in our test statistic is through a probability called the p-value . The p-value has a somewhat complex definition that many people find difficult to grasp. It is the probability of observing as much or more evidence against the null hypothesis as we actually observe, calculated when the null hypothesis is assumed to be true. We will discuss some ways to think about this more intuitively below.

For our purposes, “evidence against the null hypothesis” is reflected in how far into the tails of the reference distribution the Z-score (test statistic) falls. We observed a test statistic of 2.24 in our COVID risk perception analysis. Recall that due to the “empirical rule”, 95% of the time, a draw from a standard normal distribution falls between -2 and 2. Thus, the p-value must be less than 0.05, since 2.24 falls outside this interval. The p-value can be calculated using a computer, in this case it happens to be approximately 0.025.
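As a sketch of how this p-value can be computed (assuming SciPy is available), we take twice the upper-tail area of the standard normal distribution beyond the observed Z-score:

```python
from scipy.stats import norm

z = 2.24  # the test statistic computed above

# Two-sided p-value: probability that |Z| exceeds the observed statistic
# under the standard normal reference distribution
p_value = 2 * norm.sf(abs(z))
print(round(p_value, 3))  # approximately 0.025
```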

As stated above, the p-value tells us how likely it would be for us to obtain as much evidence against the null hypothesis as we observed in our actual data analysis, if we were certain that the null hypothesis were true. When the null hypothesis holds, any evidence against the null hypothesis is spurious. Thus, we will want to see stronger evidence against the null from our actual analysis than we would expect to see if we knew that the null hypothesis were true. A smaller p-value therefore reflects more evidence against the null hypothesis than a larger p-value.

By convention, p-values of 0.05 or smaller are considered to represent sufficiently strong evidence against the null hypothesis to make a finding “statistically significant”. This threshold of 0.05 was chosen arbitrarily 100 years ago, and there is no objective reason for it. In recent years, people have argued that either a lesser or a greater p-value threshold should be used. But largely due to convention, the practice of deeming p-values smaller than 0.05 to be statistically significant continues.

Summary of this example #

Here is a restatement of the above discussion, using slightly different language. In our analysis of COVID risk perceptions, we found a difference in proportions of 0.085 between younger and older subjects, with younger people perceiving a lower risk of dying. This is a difference based on the sample of data that we observed, but what we really want to know is whether there is a difference in COVID risk perception in the population (say, all US adults).

Suppose that in fact there is no difference in risk perception between younger and older people. For instance, suppose that in the population, 15% of people believe that they have a substantial risk of dying should they become infected with the novel coronavirus, regardless of their age. Even though the rates are equal in this imaginary population (both being 15%), the rates in our sample would typically not be equal. Around 2.5% of the time (0.025 to be more precise, matching the p-value computed above), if the rates are actually equal in the population, we would see a test statistic that is 2.24 or larger in magnitude. Since 2.5% represents a fairly rare event, we can conclude that our observed data are not compatible with the null hypothesis. We can also say that there is statistically significant evidence against the null hypothesis, and that we have “rejected” the null hypothesis at the 2.5% level.

In this data analysis, as in any data analysis, we cannot confirm definitively that the alternative hypothesis is true. But based on our data and the analysis performed above, we can claim that there is substantial evidence against the null hypothesis, using standard criteria for what is considered to be “substantial evidence”.

Comparison of means #

A very common setting where hypothesis testing is used arises when we wish to compare the means of a quantitative measurement obtained for two populations. Imagine, for example, that we have two ways of manufacturing a battery, and we wish to assess which approach yields batteries that are longer-lasting in actual use. To do this, suppose we obtain data that tells us the number of charge cycles that were completed in 200 batteries of type A, and in 300 batteries of type B. For the test developed below to be meaningful, the data must be independent and identically distributed samples.

The raw data for this study consists of 500 numbers, but it turns out that the most relevant information from the data is contained in the sample means and sample standard deviations computed within each battery type. Note that this is a huge reduction in complexity, since we started with 500 measurements and are able to summarize this down to just four numbers.

Suppose the summary statistics are as follows, where \(\bar{x}\) , \(\hat{\sigma}_x\) , and \(n\) denote the sample mean, sample standard deviation, and sample size, respectively.

| Type | \(\bar{x}\) | \(\hat{\sigma}_x\) | \(n\) |
| --- | --- | --- | --- |
| A | 420 | 70 | 200 |
| B | 403 | 90 | 300 |

The simplest measure comparing the two manufacturing approaches is the difference 420 - 403 = 17. That is, batteries of type A tend to have 17 more charge cycles compared to batteries of type B. This difference is present in our sample, but is it also true that the entire population of type A batteries has more charge cycles than the entire population of type B batteries? That is the goal of conducting a hypothesis test.

The next step in the present analysis is to divide the mean difference, which is 17, by its standard error. As we have seen, the standard error of the mean, or SEM, is \(\sigma/\sqrt{n}\) , where \(\sigma\) is the standard deviation and \(n\) is the sample size. Since \(\sigma\) is almost never known, we plug in its estimate \(\hat{\sigma}\) . For the type A batteries, the estimated SEM is thus \(70/\sqrt{200} \approx 4.95\) , and for the type B batteries the estimated SEM is \(90/\sqrt{300} \approx 5.2\) .

Since we are comparing two estimated means obtained from independent samples, we can combine their standard errors to obtain the standard error of the difference, \(\sqrt{4.95^2 + 5.2^2} \approx 7.18\) . We can now obtain our test statistic: \(17/7.18 \approx 2.37\) .

The test statistic can be calibrated against a standard normal reference distribution. The probability of observing a standard normal value that is greater in magnitude than 2.37 is 0.018 (this can be obtained from a computer). This is the p-value, and since it is smaller than the conventional threshold of 0.05, we can claim that there is a statistically significant difference between the average number of charge cycles for the two types of batteries, with the A batteries having more charge cycles on average.
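The entire two-sample Z-test can be carried out from the four summary statistics; here is a minimal sketch:

```python
import numpy as np
from scipy.stats import norm

# Summary statistics: sample mean, sample SD, and sample size per battery type
mean_a, sd_a, n_a = 420, 70, 200
mean_b, sd_b, n_b = 403, 90, 300

sem_a = sd_a / np.sqrt(n_a)             # estimated SEM for type A, ~4.95
sem_b = sd_b / np.sqrt(n_b)             # estimated SEM for type B, ~5.20
se_diff = np.sqrt(sem_a**2 + sem_b**2)  # SE of the difference in means, ~7.18

z = (mean_a - mean_b) / se_diff         # test statistic, ~2.37
p_value = 2 * norm.sf(abs(z))           # two-sided p-value, ~0.018
print(round(z, 2), round(p_value, 3))
```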

The analysis illustrated here is called a two independent samples Z-test , or just a two sample Z-test . It may be the most commonly employed of all statistical tests. It is also common to see the very similar two sample t-test , which is different only in that it uses the Student t distribution rather than the normal (Gaussian) distribution to calculate the p-values. In fact, there are quite a few minor variations on this testing framework, including “one sided” and “two sided” tests, and tests based on different ways of pooling the variance. Due to the CLT, if the sample size is modestly large (which is the case here), the results of all of these tests will be almost identical. For simplicity, we only cover the Z-test in this course.

Assessment of a correlation #

The tests for comparing proportions and means presented above are quite similar in many ways. To provide one more example of a hypothesis test that is somewhat different, we consider a test for a correlation coefficient.

Recall that the sample correlation coefficient \(\hat{r}\) is used to assess the relationship, or association, between two quantities X and Y that are measured on the same units. For example, we may ask whether two biomarkers, serum creatinine and D-dimer, are correlated with each other. These biomarkers are both commonly used in medical settings and are obtained using blood tests. D-dimer is used to assess whether a person has blood clots, and serum creatinine is used to measure kidney performance.

Suppose we are interested in whether there is a correlation in the population between D-dimer and serum creatinine. The population correlation coefficient between these two quantities can be denoted \(r\) . Our null hypothesis is \(r=0\) . Suppose that we observe a sample correlation coefficient of \(\hat{r}=0.15\) , using an independent and identically distributed sample of pairs \((x, y)\) , where \(x\) is a D-dimer measurement and \(y\) is a serum creatinine measurement. Are these data consistent with the null hypothesis?

As above, we proceed by constructing a test statistic by taking the estimated statistic and dividing it by its standard error. The approximate standard error for \(\hat{r}\) is \(1/\sqrt{n}\) , where \(n\) is the sample size. The test statistic is therefore \(\sqrt{n}\cdot \hat{r}\) , which for the sample size in this study works out to approximately 1.48.

We now calibrate this test statistic by comparing it to a standard normal reference distribution. Recall from the empirical rule that 5% of the time, a standard normal value falls outside the interval (-2, 2). Therefore, if the test statistic is smaller than 2 in magnitude, as is the case here, its p-value is greater than 0.05. Thus, in this case we know that the p-value will exceed 0.05 without calculating it, and therefore there is no basis for claiming that D-dimer and serum creatinine levels are correlated in this population.
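A minimal sketch of this test follows. The sample size is not stated above, so the value \(n = 97\) used here is an assumption chosen to reproduce the reported test statistic of roughly 1.48:

```python
import numpy as np
from scipy.stats import norm

r_hat = 0.15  # observed sample correlation
n = 97        # assumed sample size (not stated in the text)

z = np.sqrt(n) * r_hat         # test statistic, ~1.48
p_value = 2 * norm.sf(abs(z))  # ~0.14, well above the 0.05 threshold
print(round(z, 2), round(p_value, 2))
```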

Sampling properties of p-values #

A p-value is the most common way of calibrating evidence. Smaller p-values indicate stronger evidence against a null hypothesis. By convention, if the p-value is smaller than some threshold, usually 0.05, we reject the null hypothesis and declare a finding to be “statistically significant”. How can we understand more deeply what this means? One major concern should be obtaining a small p-value when the null hypothesis is true. If the null hypothesis is true, then it is incorrect to reject it. If we reject the null hypothesis, we are making a false claim. This can never be prevented with complete certainty, but we would like to have a very clear understanding of how likely it is to reject the null hypothesis when the null hypothesis is in fact true.

P-values have a special property: when the null hypothesis is true, the probability of observing a p-value smaller than 0.05 is 0.05 (5%). In fact, the probability of observing a p-value smaller than \(t\) is equal to \(t\) , for any threshold \(t\) . For example, the probability of observing a p-value smaller than 0.1, when the null hypothesis is true, is 10%.

This fact gives a more concrete understanding of how strong the evidence is for a particular p-value. If we always reject the null hypothesis when the p-value is 0.1 or smaller, then over the long run we will reject the null hypothesis 10% of the time when the null hypothesis is true. If we always reject the null hypothesis when the p-value is 0.05 or smaller, then over the long run we will reject the null hypothesis 5% of the time when the null hypothesis is true.
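A short simulation sketch illustrates this property: when Z-scores are drawn from the null (standard normal) reference distribution, the resulting two-sided p-values fall below any threshold \(t\) about a fraction \(t\) of the time:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Simulate many studies in which the null hypothesis is true:
# each Z-score is a draw from the standard normal distribution.
z = rng.standard_normal(100_000)
p = 2 * norm.sf(np.abs(z))  # two-sided p-values

# Under the null, P(p < t) should be close to t for any threshold t.
print(np.mean(p < 0.05))  # close to 0.05
print(np.mean(p < 0.10))  # close to 0.10
```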

The approach to hypothesis testing discussed above largely follows the framework developed by RA Fisher around 1925. Note that although we mentioned the alternative hypothesis above, we never actually used it. A more elaborate approach to hypothesis testing was developed somewhat later by Egon Pearson and Jerzy Neyman. The “Neyman-Pearson” approach to hypothesis testing is even more formal than Fisher’s approach, and is most suited to highly planned research efforts in which the study is carefully designed, then executed. While ideally all research projects should be carried out this way, in reality we often conduct research using data that are already available, rather than using data that are specifically collected to address the research question.

Neyman-Pearson hypothesis testing involves specifying an alternative hypothesis that we anticipate encountering. Usually this alternative hypothesis represents a realistic guess about what we might find once the data are collected. In each of the three examples above, imagine that the data are not yet collected, and we are asked to specify an alternative hypothesis. We may arrive at the following:

In comparing risk perceptions for COVID, we may anticipate that older people will perceive a 30% risk of dying, and younger people will anticipate a 5% risk of dying.

In comparing the number of charge cycles for two types of batteries, we may anticipate that battery type A will have on average 500 charge cycles, and battery type B will have on average 400 charge cycles.

In assessing the correlation between D-dimer and serum creatinine levels, we may anticipate a correlation of 0.3.

Note that none of the numbers stated here are data-driven; they are specified before any data are collected, so they do not match the results computed from the data, which are collected only later. These alternative hypotheses are all essentially speculations, based perhaps on related data or theoretical considerations.

There are several benefits of specifying an explicit alternative hypothesis, as done here, even though it is not strictly necessary and can be avoided entirely by adopting Fisher’s approach to hypothesis testing. One benefit of specifying an alternative hypothesis is that we can use it to assess the power of our planned study, which can in turn inform the design of the study, in particular the sample size. The power is the probability of rejecting the null hypothesis when the alternative hypothesis is true. That is, it is the probability of discovering something real. The power should be contrasted with the level of a hypothesis test, which is the probability of rejecting the null hypothesis when the null hypothesis is true. That is, the level is the probability of “discovering” something that is not real.

To calculate the power, recall that for many of the test statistics that we are considering here, the test statistic has the form \(\hat{\theta}/{\rm SE}(\hat{\theta})\) , where \(\hat{\theta}\) is an estimate. For example, \(\hat{\theta}\) may be the correlation coefficient between D-dimer and serum creatinine levels. As stated above, the power is the probability of rejecting the null hypothesis when the alternative hypothesis is true. Suppose we decide to reject the null hypothesis when the test statistic is greater than 2, which is approximately equivalent to rejecting the null hypothesis when the p-value is less than 0.05. The following calculation tells us how to obtain the power in this setting:

Under the alternative hypothesis, \(\sqrt{n}(\hat{r} - r)\) approximately follows a standard normal distribution. Therefore, if \(r\) and \(n\) are given, we can easily use the computer to obtain the probability of observing a value greater than \(2 - \sqrt{n}r\) . This gives us the power of the test. For example, if we anticipate \(r=0.3\) and plan to collect data for \(n=100\) observations, the power is 0.84. This is generally considered to be good power – if the true value of \(r\) is in fact 0.3, we would reject the null hypothesis 84% of the time.
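A minimal sketch of this power calculation:

```python
import numpy as np
from scipy.stats import norm

r, n = 0.3, 100  # anticipated correlation and planned sample size

# We reject H0 when sqrt(n) * r_hat > 2. Under the alternative hypothesis,
# sqrt(n) * (r_hat - r) is approximately standard normal, so the power
# is P(Z > 2 - sqrt(n) * r).
power = norm.sf(2 - np.sqrt(n) * r)
print(round(power, 2))  # 0.84
```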

A study usually has poor power because its sample size is too small. Poorly powered studies can be very misleading, but since large samples are expensive to collect, a lot of research is conducted using sample sizes that yield moderate or even low power. If a study has low power, it is unlikely to reject the null hypothesis even when the alternative hypothesis is true, yet it can still reject the null hypothesis when the null hypothesis is true (usually with probability 5%). Therefore, when a poorly powered study does reject the null hypothesis, there is a substantial chance that the rejection is spurious.



What Is Hypothesis Testing in Statistics? Types and Examples


In today’s data-driven world, decisions are based on data all the time. Hypotheses play a crucial role in that process, whether in business decisions, the health sector, academia, or quality improvement. Without hypotheses and hypothesis tests, you risk drawing the wrong conclusions and making bad decisions. In this tutorial, you will look at hypothesis testing in statistics.


What Is Hypothesis Testing in Statistics?

Hypothesis testing is a type of statistical analysis in which you put your assumptions about a population parameter to the test. It is used to assess the relationship between two statistical variables.

Let's discuss a few examples of statistical hypotheses from real life:

  • A teacher assumes that 60% of his college's students come from lower-middle-class families.
  • A doctor believes that 3D (Diet, Dose, and Discipline) is 90% effective for diabetic patients.

Now that you know about hypothesis testing, look at the two types of hypothesis testing in statistics.

Hypothesis Testing Formula

Z = ( x̅ – μ0 ) / (σ /√n)

  • Here, x̅ is the sample mean,
  • μ0 is the population mean,
  • σ is the standard deviation,
  • n is the sample size.

How Does Hypothesis Testing Work?

An analyst performs hypothesis testing on a statistical sample to present evidence of the plausibility of the null hypothesis. Measurements and analyses are conducted on a random sample of the population to test a theory. Analysts use a random population sample to test two hypotheses: the null and alternative hypotheses.

The null hypothesis is typically an equality hypothesis between population parameters; for example, a null hypothesis may claim that the population mean return equals zero. The alternative hypothesis is essentially the inverse of the null hypothesis (e.g., the population mean return is not equal to zero). The two are mutually exclusive, so only one can be true; however, one of the two will always be true.


Null Hypothesis and Alternate Hypothesis

The Null Hypothesis is the assumption that the event will not occur. A null hypothesis has no bearing on the study's outcome unless it is rejected.

H0 is the symbol for it, and it is pronounced H-naught.

The Alternate Hypothesis is the logical opposite of the null hypothesis. The acceptance of the alternative hypothesis follows the rejection of the null hypothesis. H1 is the symbol for it.

Let's understand this with an example.

A sanitizer manufacturer claims that its product kills 95 percent of germs on average. 

To put this company's claim to the test, create a null and alternate hypothesis.

H0 (Null Hypothesis): Average = 95%.

Alternative Hypothesis (H1): The average is less than 95%.

Another straightforward example of this concept is determining whether or not a coin is fair and balanced. The null hypothesis states that the probability of heads is equal to the probability of tails. In contrast, the alternative hypothesis states that the probabilities of heads and tails would be very different.


Hypothesis Testing Calculation With Examples

Let's consider a hypothesis test for the average height of women in the United States. Suppose our null hypothesis is that the average height is 5'4". We gather a sample of 100 women and determine that their average height is 5'5". The population standard deviation is 2 inches.

To calculate the z-score, we would use the following formula:

z = ( x̅ – μ0 ) / (σ /√n)

z = (5'5" - 5'4") / (2" / √100)

z = 1 / 0.2 = 5

We will reject the null hypothesis, as the z-score of 5 is very large, and conclude that there is evidence to suggest that the average height of women in the US is greater than 5'4".
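As a quick check of this arithmetic, here is a minimal sketch in Python (with heights converted to inches):

```python
import math

x_bar = 65   # sample mean height in inches (5'5")
mu0 = 64     # hypothesized population mean in inches (5'4")
sigma = 2    # population standard deviation in inches
n = 100      # sample size

z = (x_bar - mu0) / (sigma / math.sqrt(n))
print(z)  # 5.0
```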

Steps of Hypothesis Testing

Hypothesis testing is a statistical method to determine if there is enough evidence in a sample of data to infer that a certain condition is true for the entire population. Here’s a breakdown of the typical steps involved in hypothesis testing:

Formulate Hypotheses

  • Null Hypothesis (H0): This hypothesis states that there is no effect or difference, and it is the hypothesis you attempt to reject with your test.
  • Alternative Hypothesis (H1 or Ha): This hypothesis is what you might believe to be true or hope to prove true. It is usually considered the opposite of the null hypothesis.

Choose the Significance Level (α)

The significance level, often denoted by alpha (α), is the probability of rejecting the null hypothesis when it is true. Common choices for α are 0.05 (5%), 0.01 (1%), and 0.10 (10%).

Select the Appropriate Test

Choose a statistical test based on the type of data and the hypothesis. Common tests include t-tests, chi-square tests, ANOVA, and regression analysis. The selection depends on data type, distribution, sample size, and whether the hypothesis is one-tailed or two-tailed.

Collect Data

Gather the data that will be analyzed in the test. This data should be representative of the population to infer conclusions accurately.

Calculate the Test Statistic

Based on the collected data and the chosen test, calculate a test statistic that reflects how much the observed data deviates from the null hypothesis.

Determine the p-value

The p-value is the probability of observing test results at least as extreme as the results observed, assuming the null hypothesis is correct. It helps determine the strength of the evidence against the null hypothesis.

Make a Decision

Compare the p-value to the chosen significance level:

  • If the p-value ≤ α: Reject the null hypothesis, suggesting sufficient evidence in the data supports the alternative hypothesis.
  • If the p-value > α: Do not reject the null hypothesis, suggesting insufficient evidence to support the alternative hypothesis.

Report the Results

Present the findings from the hypothesis test, including the test statistic, p-value, and the conclusion about the hypotheses.

Perform Post-hoc Analysis (if necessary)

Depending on the results and the study design, further analysis may be needed to explore the data more deeply or to address multiple comparisons if several hypotheses were tested simultaneously.
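To illustrate how these steps fit together, here is a hedged end-to-end sketch using a one-sample t-test from SciPy. The data are simulated, so the hypothesized mean of 100 and all other numbers are illustrative assumptions rather than values from this tutorial:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Steps 1-2: H0: population mean = 100; H1: mean != 100; significance level 0.05
alpha = 0.05

# Steps 3-4: choose a one-sample t-test and collect data (simulated here)
sample = rng.normal(loc=103, scale=10, size=50)

# Steps 5-6: compute the test statistic and p-value
t_stat, p_value = stats.ttest_1samp(sample, popmean=100)

# Step 7: compare the p-value to alpha and decide
if p_value <= alpha:
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}: reject H0")
else:
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}: fail to reject H0")
```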

Types of Hypothesis Testing

Z Test

A z-test is used to determine whether a discovery or relationship is statistically significant. It usually checks whether two means are the same (the null hypothesis). A z-test can be applied only when the population standard deviation is known and the sample size is at least 30 data points.

T Test

A t-test is a statistical test used to compare the means of two groups. It is frequently used in hypothesis testing to determine whether two groups differ, or whether a procedure or treatment affects the population of interest.

Chi-Square 

A chi-square test is used for hypothesis testing about whether your data are as expected. To determine whether the expected and observed results are well-fitted, the chi-square test analyzes the differences between categorical variables from a random sample. The test's fundamental premise is that the observed values in your data should be compared to the expected values that would be present if the null hypothesis were true.
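As an illustrative sketch (the die-roll counts below are hypothetical, not from this tutorial), SciPy's chisquare function compares observed category counts to the counts expected under the null hypothesis:

```python
from scipy.stats import chisquare

# Observed counts from 120 hypothetical die rolls vs. the counts
# expected if the die is fair (20 per face)
observed = [22, 17, 24, 15, 20, 22]
expected = [20, 20, 20, 20, 20, 20]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(round(stat, 2), round(p_value, 3))  # a large p-value: no evidence of unfairness
```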

Hypothesis Testing and Confidence Intervals

Both confidence intervals and hypothesis tests are inferential techniques that depend on approximating the sampling distribution. Confidence intervals use data from a sample to estimate a population parameter, while hypothesis testing uses data from a sample to examine a given hypothesis. To conduct a hypothesis test, we must have a postulated parameter value.

Bootstrap distributions and randomization distributions are created using comparable simulation techniques. The observed sample statistic is the focal point of a bootstrap distribution, whereas the null hypothesis value is the focal point of a randomization distribution.

A confidence interval contains a range of plausible estimates of the population parameter. In this lesson, we created only two-tailed confidence intervals, and there is a direct connection between two-tailed confidence intervals and two-tailed hypothesis tests: the two typically give the same result. In other words, a hypothesis test at the 0.05 level will virtually always fail to reject the null hypothesis if the 95% confidence interval contains the hypothesized value, and will nearly always reject the null hypothesis if the 95% confidence interval does not contain it.
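A small sketch of this duality, using assumed summary statistics (all numbers below are illustrative only):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical summary statistics: sample mean, known population SD,
# sample size, and hypothesized mean
x_bar, sigma, n, mu0 = 52.0, 10.0, 100, 50.0
se = sigma / np.sqrt(n)

# Two-sided 95% confidence interval for the mean
lo, hi = x_bar - 1.96 * se, x_bar + 1.96 * se

# Two-sided z-test of H0: mean = mu0
z = (x_bar - mu0) / se
p_value = 2 * norm.sf(abs(z))

# The test rejects at the 0.05 level exactly when mu0 falls outside the CI
print(f"95% CI: ({lo:.2f}, {hi:.2f})")
print(f"p = {p_value:.4f}; reject H0: {p_value < 0.05}; mu0 in CI: {lo <= mu0 <= hi}")
```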


Simple and Composite Hypothesis Testing

You can classify a statistical hypothesis into two types, depending on how completely it specifies the population distribution.

Simple Hypothesis: A simple hypothesis specifies an exact value for the parameter.

Composite Hypothesis: A composite hypothesis specifies a range of values.

A company is claiming that their average sales for this quarter are 1000 units. This is an example of a simple hypothesis.

Suppose the company claims that the sales are in the range of 900 to 1000 units. Then this is a case of a composite hypothesis.

One-Tailed and Two-Tailed Hypothesis Testing

A one-tailed test, also called a directional test, has a critical region on one side of the reference distribution; if the test statistic falls into that region, the null hypothesis is rejected in favor of the alternative hypothesis.

In a one-tailed test, the critical region is one-sided, meaning the test checks whether the sample statistic is greater than, or less than, a specific value.

In a two-tailed test, the critical region is two-sided: the test checks whether the sample statistic is either above an upper cutoff or below a lower cutoff.

If the sample statistic falls into the critical region, the null hypothesis is rejected and the alternative hypothesis is accepted.


Right Tailed Hypothesis Testing

If the greater-than sign (>) appears in your hypothesis statement, you are using a right-tailed test, also known as an upper-tail test. In other words, the critical region lies to the right. For instance, you can compare battery life before and after a change in production. If you want to know whether the battery life is longer than the original (say, 90 hours), your hypothesis statements can be the following:

  • Null hypothesis: H0: μ ≤ 90 (battery life has not increased).
  • Alternative hypothesis: H1: μ > 90 (battery life has increased).

The crucial point in this situation is that the alternative hypothesis (H1), not the null hypothesis, determines whether you have a right-tailed test.

Left Tailed Hypothesis Testing

A left-tailed test is used for alternative hypotheses asserting that the true value of a parameter is lower than the value stated in the null hypothesis; such alternatives are indicated by the less-than sign (<).

Suppose H0: mean = 50 and H1: mean ≠ 50.

According to H1, the mean can be greater than or less than 50. This is an example of a two-tailed test.

Similarly, if H0: mean ≥ 50, then H1: mean < 50.

Here the alternative states that the mean is less than 50. This is a one-tailed (left-tailed) test.
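A small sketch of how the choice of tail affects the p-value for the same test statistic (the Z value below is illustrative):

```python
from scipy.stats import norm

z = 1.8  # hypothetical test statistic

p_right = norm.sf(z)         # right-tailed: P(Z > z)
p_left = norm.cdf(z)         # left-tailed:  P(Z < z)
p_two = 2 * norm.sf(abs(z))  # two-tailed:   P(|Z| > |z|)

print(round(p_right, 3), round(p_left, 3), round(p_two, 3))
# 0.036 0.964 0.072 -- significant right-tailed at 0.05, but not two-tailed
```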

Type 1 and Type 2 Error

A hypothesis test can result in two types of errors.

Type I Error: A Type I error occurs when the sample results lead to rejecting the null hypothesis despite it being true.

Type II Error: A Type II error occurs when the null hypothesis is not rejected even though it is false.

Suppose a teacher evaluates the examination paper to decide whether a student passes or fails.

H0: Student has passed

H1: Student has failed

A Type I error would be the teacher failing the student [rejecting H0] although the student scored the passing marks [H0 was true].

A Type II error would be the teacher passing the student [not rejecting H0] although the student did not score the passing marks [H1 is true].

Level of Significance

The alpha value is a criterion for determining whether a test statistic is statistically significant. In a statistical test, alpha represents an acceptable probability of a Type I error. Because alpha is a probability, it can be anywhere between 0 and 1. In practice, the most commonly used alpha values are 0.01, 0.05, and 0.1, which represent a 1%, 5%, and 10% chance of a Type I error, respectively (i.e., rejecting the null hypothesis when it is in fact correct).

A p-value is a metric that expresses the likelihood that an observed difference could have occurred by chance. As the p-value decreases the statistical significance of the observed difference increases. If the p-value is too low, you reject the null hypothesis.

Suppose you are testing whether a new advertising campaign has increased the product's sales. The p-value is the probability of observing the data (or something more extreme) if the null hypothesis, which states that there is no change in sales due to the new advertising campaign, is true. If the p-value is 0.30, then there is a 30% chance of seeing an increase this large even if the campaign had no effect on sales. If the p-value is 0.03, that chance is only 3%. As you can see, the lower the p-value, the stronger the evidence that the new advertising campaign genuinely changed sales.


Why Is Hypothesis Testing Important in Research Methodology?

Hypothesis testing is crucial in research methodology for several reasons:

  • Provides evidence-based conclusions: It allows researchers to make objective conclusions based on empirical data, providing evidence to support or refute their research hypotheses.
  • Supports decision-making: It helps make informed decisions, such as accepting or rejecting a new treatment, implementing policy changes, or adopting new practices.
  • Adds rigor and validity: It adds scientific rigor to research using statistical methods to analyze data, ensuring that conclusions are based on sound statistical evidence.
  • Contributes to the advancement of knowledge: By testing hypotheses, researchers contribute to the growth of knowledge in their respective fields by confirming existing theories or discovering new patterns and relationships.

When Did Hypothesis Testing Begin?

Hypothesis testing as a formalized process began in the early 20th century, primarily through the work of statisticians such as Ronald A. Fisher, Jerzy Neyman, and Egon Pearson. The development of hypothesis testing is closely tied to the evolution of statistical methods during this period.

  • Ronald A. Fisher (1920s): Fisher was one of the key figures in developing the foundation for modern statistical science. In the 1920s, he introduced the concept of the null hypothesis in his book "Statistical Methods for Research Workers" (1925). Fisher also developed significance testing to examine the likelihood of observing the collected data if the null hypothesis were true. He introduced p-values to determine the significance of the observed results.
  • Neyman-Pearson Framework (1930s): Jerzy Neyman and Egon Pearson built on Fisher’s work and formalized the process of hypothesis testing even further. In the 1930s, they introduced the concepts of Type I and Type II errors and developed a decision-making framework widely used in hypothesis testing today. Their approach emphasized the balance between these errors and introduced the concepts of the power of a test and the alternative hypothesis.

The dialogue between Fisher's and Neyman-Pearson's approaches shaped the methods and philosophy of statistical hypothesis testing used today. Fisher emphasized the evidential interpretation of the p-value. At the same time, Neyman and Pearson advocated for a decision-theoretical approach in which hypotheses are either accepted or rejected based on pre-determined significance levels and power considerations.

The application and methodology of hypothesis testing have since become a cornerstone of statistical analysis across various scientific disciplines, marking a significant statistical development.

Limitations of Hypothesis Testing

Hypothesis testing has some limitations that researchers should be aware of:

  • It cannot prove or establish the truth: Hypothesis testing provides evidence to support or reject a hypothesis, but it cannot confirm the absolute truth of the research question.
  • Results are sample-specific: Hypothesis testing is based on analyzing a sample from a population, and the conclusions drawn are specific to that particular sample.
  • Possible errors: During hypothesis testing, there is a chance of committing type I error (rejecting a true null hypothesis) or type II error (failing to reject a false null hypothesis).
  • Assumptions and requirements: Different tests have specific assumptions and requirements that must be met to accurately interpret results.


After reading this tutorial, you should have a much better understanding of hypothesis testing, one of the most important concepts in the field of Data Science. The majority of hypotheses are based on speculation about observed behavior, natural phenomena, or established theories.

If you are interested in the statistics behind data science and the skills needed for such a career, explore the Post Graduate Program in Data Science.

If you have any questions regarding this ‘Hypothesis Testing In Statistics’ tutorial, do share them in the comment section. Our subject matter expert will respond to your queries. Happy learning!

1. What is hypothesis testing in statistics with example?

Hypothesis testing is a statistical method used to determine whether there is enough evidence in sample data to draw conclusions about a population. It involves formulating two competing hypotheses, the null hypothesis (H0) and the alternative hypothesis (Ha), and then collecting data to assess the evidence. An example: testing whether a new drug improves patient recovery (Ha) compared to the standard treatment (H0) based on collected patient data.

2. What is H0 and H1 in statistics?

In statistics, H0 and H1 represent the null and alternative hypotheses. The null hypothesis, H0, is the default assumption that no effect or difference exists between groups or conditions. The alternative hypothesis, H1, is the competing claim suggesting an effect or a difference. Statistical tests determine whether to reject the null hypothesis in favor of the alternative hypothesis based on the data.

3. What is a simple hypothesis with an example?

A simple hypothesis is a specific statement predicting a single relationship between two variables. It posits a direct and uncomplicated outcome. For example, a simple hypothesis might state, "Increased sunlight exposure increases the growth rate of sunflowers." Here, the hypothesis suggests a direct relationship between the amount of sunlight (independent variable) and the growth rate of sunflowers (dependent variable), with no additional variables considered.

4. What are the 2 types of hypothesis testing?

  • One-tailed (or one-sided) test: Tests for the significance of an effect in only one direction, either positive or negative.
  • Two-tailed (or two-sided) test: Tests for the significance of an effect in both directions, allowing for the possibility of a positive or negative effect.

The choice between one-tailed and two-tailed tests depends on the specific research question and the directionality of the expected effect.

5. What are the 3 major types of hypothesis?

The three major types of hypotheses are:

  • Null Hypothesis (H0): Represents the default assumption, stating that there is no significant effect or relationship in the data.
  • Alternative Hypothesis (Ha): Contradicts the null hypothesis and proposes a specific effect or relationship that researchers want to investigate.
  • Nondirectional Hypothesis: An alternative hypothesis that doesn't specify the direction of the effect, leaving it open for both positive and negative possibilities.


About the Author

Avijeet Biswal

Avijeet is a Senior Research Analyst at Simplilearn. Passionate about Data Analytics, Machine Learning, and Deep Learning, Avijeet is also interested in politics, cricket, and football.



Hypothesis Testing for General Network Models

Abstract: Network data have attracted considerable attention in modern statistics. In research on complex network data, one key issue is identifying the underlying connection structure given a network sample. The methods proposed in the literature usually assume that the underlying structure follows a known model. In practice, however, the true model is usually unknown, and network learning procedures based on these methods may suffer from model misspecification. To handle this issue, based on random matrix theory, we first give a spectral property of the normalized adjacency matrix under a mild condition. Further, we establish a general goodness-of-fit test procedure for unweighted and undirected networks. We prove that the null distribution of the proposed statistic converges in distribution to the standard normal distribution. Theoretically, this testing procedure is suitable for nearly all popular network models, such as stochastic block models and latent space models. Further, we apply the proposed method to the degree-corrected mixed membership model and give a sequential estimator of the number of communities. Both simulation studies and real-world data examples indicate that the proposed method works well.
Subjects: Methodology (stat.ME)



Do Directors’ Network Positions Affect Corporate Fraud?


1. Introduction
2. Theoretical Analysis and Hypotheses
  2.1. Directors’ Network Positions and Corporate Fraud
  2.2. The Mediating Role of Corporate Internal Control
  2.3. The Mediating Role of External Auditing
3. Data and Methodology
  3.1. Data and Sample Selection
  3.2. Variable Measurement
    3.2.1. Dependent Variable
    3.2.2. Independent Variable
    3.2.3. Mediating Variable
    3.2.4. Control Variable
  3.3. Model Setting
    3.3.1. Basic Regression Model
    3.3.2. Modeling the Mediating Effects of Internal Controls
    3.3.3. Modeling the Mediating Effect of External Audit
4. Empirical Analysis
  4.1. Descriptive Statistics
  4.2. Basic Regression Results
  4.3. Mechanism Test and Results
  4.4. Endogeneity Test
  4.5. Robustness Test
5. Further Analysis
6. Discussion and Conclusions

Author Contributions, Institutional Review Board Statement, Informed Consent Statement, Data Availability Statement, Conflicts of Interest

  • Eugster, N.; Kowalewski, O.; Śpiewanowski, P. Internal Governance Mechanisms and Corporate Misconduct. Int. Rev. Financ. Anal. 2024 , 92 , 103109. [ Google Scholar ] [ CrossRef ]
  • Beasley, M.S. An Empirical Analysis of the Relation between the Board of Director Composition and Financial Statement Fraud. Account. Rev. 1996 , 71 , 443–465. [ Google Scholar ]
  • Liu, Y.; Wei, Z.; Xie, F. Do Women Directors Improve Firm Performance in China? J. Corp. Financ. 2014 , 28 , 169–184. [ Google Scholar ] [ CrossRef ]
  • Ghafran, C.; O’Sullivan, N. The Governance Role of Audit Committees: Reviewing a Decade of Evidence. Int. J. Manag. Rev. 2013 , 15 , 381–407. [ Google Scholar ] [ CrossRef ]
  • DeFond, M.L.; Francis, J.R. Audit Research after Sarbanes-Oxley. Audit. J. Pract. Theory 2005 , 24 , 5–30. [ Google Scholar ] [ CrossRef ]
  • Povel, P.; Singh, R.; Winton, A. Booms, busts, and fraud. Rev. Financ. Stud. 2007 , 20 , 1219–1254. [ Google Scholar ] [ CrossRef ]
  • Wang, T.Y.; Winton, A.; Yu, X. Corporate fraud and business conditions: Evidence from IPOs. J. Financ. 2010 , 65 , 2255–2292. [ Google Scholar ] [ CrossRef ]
  • Denis, D.K.; Denis, D.J.; Sarin, A. Ownership Structure and Top Executive Turnover. J. Financ. Econ. 1997 , 45 , 193–221. [ Google Scholar ] [ CrossRef ]
  • Chung, R.; Firth, M.; Kim, J.B. Institutional Monitoring and Opportunistic Earnings Management. J. Corp. Financ. 2002 , 8 , 29–48. [ Google Scholar ] [ CrossRef ]
  • Khanna, V.; Kim, E.H.; Lu, Y. CEO Connectedness and Corporate Fraud. J. Financ. 2015 , 70 , 1203–1252. [ Google Scholar ] [ CrossRef ]
  • Arlen, J.; Carney, W.J. Vicarious Liability for Fraud on Securities Markets: Theory and Evidence. Univ. Ill. Law Rev. 1992 , 691–746. [ Google Scholar ]
  • Bakke, T.E.; Black, J.R.; Mahmudi, H.; Linn, S.C. Director networks and firm value. J. Corp. Financ. 2024 , 85 , 102545. [ Google Scholar ] [ CrossRef ]
  • Chang, C.-H.; Wu, Q. Board Networks and Corporate Innovation. Manag. Sci. 2021 , 67 , 3618–3654. [ Google Scholar ] [ CrossRef ]
  • Nandy, M.; Lodh, S.; Kaur, J.; Wang, J. Impact of directors’ networks on corporate social responsibility: A cross country study. Int. Rev. Financ. Anal. 2020 , 72 , 101601. [ Google Scholar ] [ CrossRef ]
  • Schabus, M. Do director networks help managers forecast better? Account. Rev. 2022 , 97 , 397–426. [ Google Scholar ] [ CrossRef ]
  • Xing, J.; Zhang, Y.; Xiong, X.; Li, G. Covering or Monitoring? Independent Director Connectedness and Corporate Fraud in China. China J. Account. Res. 2022 , 15 , 100273. [ Google Scholar ] [ CrossRef ]
  • Core, J.E.; Holthausen, R.W.; Larcker, D.F. Corporate Governance, Chief Executive Officer Compensation, and Firm Performance. J. Financ. Econ. 1999 , 51 , 371–406. [ Google Scholar ] [ CrossRef ]
  • Fich, E.M.; Shivdasani, A. Are Busy Boards Effective Monitors? J. Financ. 2006 , 61 , 689–724. [ Google Scholar ] [ CrossRef ]
  • Kuang, Y.F.; Lee, G. Corporate fraud and external social connectedness of independent directors. J. Corp. Financ. 2017 , 45 , 401–427. [ Google Scholar ] [ CrossRef ]
  • Coles, J.L.; Li, Z.F.; Wang, A.Y. A Model of Industry Tournament Incentives (30 January 2020). Available online: https://ssrn.com/abstract=3528738 (accessed on 4 April 2018).
  • Fama, E.F. Agency Problems and the Theory of the Firm. J. Political Econ. 1980 , 88 , 288–307. [ Google Scholar ] [ CrossRef ]
  • Fombrun, C.J. Reputation ; Harvard Business School Press: Brighton, MA, USA, 1996. [ Google Scholar ]
  • Grey, E.; Balmer, J. Managing corporate image and corporate reputation. Long Range Plan. 1998 , 31 , 695–702. [ Google Scholar ] [ CrossRef ]
  • Shane, S.; Cable, D. Network ties, reputation and the financing of new ventures. Manag. Sci. 2002 , 48 , 364–381. [ Google Scholar ] [ CrossRef ]
  • Krackhardt, D. The Strength of Strong Ties: The Importance of Philos in Organizations. In Networks in the Knowledge Economy ; Oxford University Press: Oxford, UK, 2003; ISBN 978-0-19-515950-9. [ Google Scholar ]
  • Wang, Y.; Zhang, G. Director networks and firm innovation: Attracting capital and wisdom. J. Financ. Res. 2018 , 49 , 189–206. [ Google Scholar ]
  • Xie, D.; Chen, Y. Director networks: Definition, characterization and measurement. J. Account. Res. 2012 , 3 , 44–51, 95. [ Google Scholar ]
  • Cashman, G.D.; Gillan, S.L.; Jun, C. Going Overboard? On Busy Directors and Firm Value. J. Bank Financ. 2012 , 36 , 3248–3259. [ Google Scholar ] [ CrossRef ]
  • Duchin, R.; Matsusaka, J.G.; Ozbas, O. When Are Outside Directors Effective? J. Financ. Econ. 2010 , 96 , 195–214. [ Google Scholar ] [ CrossRef ]
  • Larcker, D.F.; So, E.C.; Wang, C.C.Y. Boardroom centrality and firm performance. J. Account. Econ. 2013 , 55 , 225–250. [ Google Scholar ] [ CrossRef ]
  • Hillman, A.J.; Dalziel, T. Boards of directors and firm performance: Integrating agency and resource dependence perspectives. Acad. Manag. Rev. 2003 , 28 , 383–396. [ Google Scholar ] [ CrossRef ]
  • Coles, J.L.; Daniel, N.D.; Naveen, L. Co-Opted Boards. Rev. Einanc. Stud. 2014 , 27 , 1751–1796. [ Google Scholar ] [ CrossRef ]
  • Ferris, S.P.; Jagannathan, M.; Pritchard, A.C. Too Busy to Mind the Business? Monitoring by Directors with Multiple Board Appointments. J. Financ. 2003 , 58 , 1087–1111. [ Google Scholar ] [ CrossRef ]
  • Andres, C.; Lehmann, M. Is Busy Really Busy? Board Governance Revisited. J. Bus. Financ. Account. 2013 , 40 , 1221–1246. [ Google Scholar ] [ CrossRef ]
  • Faleye, O. Classified Boards, Firm Value, and Managerial Entrenchment. J. Financ. Econ. 2007 , 83 , 501–529. [ Google Scholar ] [ CrossRef ]
  • Ahn, S.; Jiraporn, P.; Kim, Y.S. Multiple Directorships and Acquirer Returns. J. Bank. Financ. 2010 , 34 , 2011–2026. [ Google Scholar ] [ CrossRef ]
  • Yang, X.; Li, F.; Zhao, Y. Internal control spillovers of chain directors—A perspective based on internal control deficiencies. J. Audit. Res. 2022 , 3 , 117–128. [ Google Scholar ]


Variable definitions:

| Type | Name | Symbol | Definition |
| --- | --- | --- | --- |
| Dependent Variable | Corporate Fraud | Fraud | Dummy variable equal to one if the firm commits fraud, and zero otherwise. |
| Independent Variable | Directors’ Network Position | Centrality | The semi-local centrality of the top node. |
| Mediating Variable | Internal Control | DBI | DIBO Internal Control Index. |
| Mediating Variable | External Audit | Big4 | Equals 1 if the firm is audited by a Big 4 international firm, otherwise 0. |
| Control Variable | Leverage Ratio | Lev | Total liabilities at the end of the year divided by total assets at the end of the year. |
| Control Variable | Enterprise Value | Tobin’s Q | Ratio of the market value of the firm’s assets to their replacement cost. |
| Control Variable | Ownership Concentration | Top1 | Shareholding of the largest shareholder. |
| Control Variable | Return on Net Assets | ROE | Ratio of the firm’s net profit to its average net worth, reflecting the return on owners’ equity. |
| Control Variable | Company Size | Size | Natural logarithm of the company’s total assets. |
| Control Variable | Time to Market | Age | Company listing age. |
| Control Variable | Board Size | Board | Number of board members. |
| Control Variable | Nature of Property Rights | SOE | Equals 1 for state-owned firms and 0 for non-state-owned firms, based on the company’s beneficial owner. |
| Control Variable | Institutional Investor Shareholding | Indsh | Institutional investor shareholding as a percentage of total equity. |
| Control Variable | CEO Duality | Dual | Equals 1 if the Chairman and CEO are the same person, 0 otherwise. |
| Control Variable | Industry | Ind | Industry dummies: 1 for companies in a given industry, 0 for all others. |
| Control Variable | Year | Year | Year dummies: 1 for the relevant year, 0 for all other years. |
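The dummy variables defined above translate directly into code. A minimal pandas sketch, with hypothetical column names, of how indicators such as Fraud, SOE, and Dual might be constructed:

```python
import pandas as pd

# Hypothetical illustration of constructing dummy variables like those
# defined above (Fraud, SOE, Dual); the column names are assumptions.
df = pd.DataFrame({
    "violations": [0, 2, 0, 1],          # count of fraud events detected
    "owner_type": ["SOE", "private", "SOE", "private"],
    "chairman_is_ceo": [True, False, False, True],
})

df["Fraud"] = (df["violations"] > 0).astype(int)      # 1 if any fraud, else 0
df["SOE"] = (df["owner_type"] == "SOE").astype(int)   # 1 for state-owned firms
df["Dual"] = df["chairman_is_ceo"].astype(int)        # 1 if Chairman = CEO

print(df[["Fraud", "SOE", "Dual"]])
```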
Descriptive statistics:

| Variable | N | Mean | Std. Dev. | Median | Min | Max | VIF |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Fraud | 43,439 | 0.363 | 0.481 | 0.000 | 0.000 | 1.000 | - |
| Centrality | 43,439 | 3.530 | 1.254 | 3.850 | 0.000 | 5.236 | 1.13 |
| Lev | 43,439 | 0.455 | 1.295 | 0.455 | −0.195 | 178.345 | 1.40 |
| Tobin’s Q | 43,439 | 2.551 | 71.783 | 2.055 | 0.000 | 14,810.306 | 1.39 |
| Top1 | 43,439 | 34.162 | 15.22 | 34.162 | 0.000 | 100.000 | 1.24 |
| ROE | 43,439 | 0.031 | 4.405 | 0.031 | −207.397 | 713.204 | 1.00 |
| Size | 43,439 | 22.079 | 1.373 | 17.560 | 11.348 | 28.636 | 1.05 |
| Age | 43,439 | 13.93 | 8.502 | 13.000 | 0.000 | 32.000 | 1.31 |
| Boardsize | 43,439 | 10.446 | 3.696 | 10.000 | 4.000 | 58.000 | 1.12 |
| SOE | 43,439 | 0.34 | 0.474 | 0.000 | 0.000 | 1.000 | 1.44 |
| Indsh | 43,439 | 38.913 | 25.54 | 38.913 | 0.000 | 144.675 | 1.31 |
| Dual | 43,439 | 0.286 | 0.452 | 0.000 | 0.000 | 1.000 | 1.00 |
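The VIF column above reports variance inflation factors, a standard multicollinearity check on the regressors. A minimal sketch using statsmodels on simulated data (all values are illustrative, not the paper's):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Simulated regressors, roughly matching the scales in the table above.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "Centrality": rng.normal(3.5, 1.3, 500),
    "Lev": rng.normal(0.45, 0.2, 500),
    "Size": rng.normal(22.0, 1.4, 500),
})
X = sm.add_constant(X)  # VIFs are conventionally computed with an intercept

for i, name in enumerate(X.columns):
    if name != "const":
        print(name, round(variance_inflation_factor(X.values, i), 2))
```

Independent draws like these should yield VIFs close to 1, as in the table; values well above roughly 5 or 10 would signal collinearity.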
Univariate comparison of group means:

| Variable | Non-Corporate Fraud Companies (N = 27,652), Mean | Corporate Fraud Companies (N = 15,787), Mean | Test of Difference |
| --- | --- | --- | --- |
| Centrality | 54.037 | 52.289 | 4.25 *** |
| Lev | 0.435 | 0.490 | −4.35 *** |
| Tobin’s Q | 2.646 | 2.120 | 0.60 |
| Top1 | 35.446 | 31.910 | 23.45 *** |
| ROE | 0.099 | −0.087 | 4.20 *** |
| Size | 22.129 | 21.991 | 10.15 *** |
| Age | 13.639 | 14.440 | −9.45 *** |
| Board | 9.920 | 11.369 | −40.05 *** |
| SOE | 0.369 | 0.289 | 17.15 *** |
| Indsh | 40.252 | 36.565 | 14.50 *** |
| Dual | 0.290 | 0.279 | 2.50 ** |
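Group-difference tests like those in the table above are textbook two-sample hypothesis tests: H₀ says the two group means are equal. A minimal sketch on simulated data (Welch's t-test; the sample sizes and values are illustrative only):

```python
import numpy as np
from scipy import stats

# Compare a variable (e.g., leverage) between non-fraud and fraud firms.
# The samples are simulated; only the testing procedure is the point.
rng = np.random.default_rng(42)
lev_nonfraud = rng.normal(0.435, 0.20, 1000)
lev_fraud = rng.normal(0.490, 0.20, 1000)

# equal_var=False gives Welch's t-test, which does not assume equal variances.
t_stat, p_value = stats.ttest_ind(lev_nonfraud, lev_fraud, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # small p -> reject equal means
```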
Baseline regression results (dependent variable: Fraud; test statistics in parentheses; column (1) without controls, column (2) with controls):

| Variable | (1) | (2) |
| --- | --- | --- |
| Centrality | −0.030 *** (−3.23) | −0.056 *** (−6.41) |
| Lev | | 0.113 *** (3.85) |
| Tobin’s Q | | −0.001 *** (−3.22) |
| Top1 | | −0.008 *** (−9.58) |
| ROE | | −0.031 *** (−3.06) |
| Size | | −0.000 *** (−6.71) |
| Age | | 0.017 *** (10.21) |
| Boardsize | | 0.119 *** (35.47) |
| SOE | | −0.482 *** (−17.08) |
| Indsh | | −0.004 *** (−6.92) |
| Dual | | −0.031 (−1.28) |
| Year | Control | Control |
| Industry | Control | Control |
| Constant | −1.081 (−6.66) | −0.918 (−5.72) |
| R² | 0.067 | 0.028 |
| Observations | 43,439 | 43,439 |
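Because Fraud is a binary outcome, a specification like column (2) would typically be estimated with a logit (or probit) model including year and industry dummies. A hedged sketch on simulated data; the variable names, the simulated effect sizes, and the logit choice are assumptions for illustration, not the paper's confirmed method:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a panel-like cross-section with a negative centrality effect.
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "Centrality": rng.normal(3.5, 1.25, n),
    "Lev": rng.normal(0.45, 0.2, n),
    "Size": rng.normal(22.0, 1.4, n),
    "year": rng.integers(2009, 2023, n),      # sample period 2009-2022
    "industry": rng.integers(1, 10, n),
})
logit_p = -1.0 - 0.3 * df["Centrality"] + 0.5 * df["Lev"]
df["Fraud"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Logit of Fraud on Centrality plus controls, with year/industry dummies.
model = smf.logit(
    "Fraud ~ Centrality + Lev + Size + C(year) + C(industry)", data=df
).fit(disp=0)
print(model.params["Centrality"])  # negative by construction here, as in the table
```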
Mediation analysis through internal control (DBI) and external audit (Big4); test statistics in parentheses:

| Variable | (1) DBI | (2) Fraud | (3) Big4 | (4) Fraud |
| --- | --- | --- | --- | --- |
| Big4 | | | | −0.239 *** (−4.20) |
| DBI | | −0.128 *** (−4.2) | | |
| Centrality | 0.174 *** (3.17) | −0.041 *** (−6.36) | 0.074 *** (3.17) | −0.055 *** (−6.36) |
| Lev | −3.386 (0.85) | 0.631 *** (3.85) | 0.030 (0.85) | 0.113 *** (3.85) |
| Tobin’s Q | −0.154 *** (−4.27) | 0.022 *** (−3.22) | −0.088 *** (−4.27) | −0.001 *** (−3.22) |
| Top1 | 0.018 * (−1.69) | −0.006 *** (−9.59) | −0.003 * (−1.69) | −0.008 *** (−9.59) |
| ROE | 1.308 (0.48) | −0.796 *** (−3.06) | 0.004 (0.48) | −0.031 *** (−3.06) |
| Size | 0.000 *** (18.64) | −0.000 *** (−5.83) | 0.000 *** (18.64) | −0.000 *** (−5.83) |
| Age | −0.081 (−0.51) | 0.008 *** (10.11) | −0.002 (−0.51) | 0.017 *** (10.11) |
| Boardsize | −0.124 (0.10) | 0.110 *** (35.47) | 0.001 (0.10) | 0.119 *** (35.47) |
| SOE | 0.501 *** (3.08) | −0.485 *** (−17.04) | 0.190 *** (3.08) | −0.481 *** (−17.04) |
| Indsh | 0.000 *** (25.24) | −0.003 *** (−6.40) | 0.033 *** (25.24) | −0.003 *** (−6.40) |
| Dual | −0.003 (1.26) | −0.032 (−1.27) | 0.071 (1.26) | −0.031 (−1.27) |
| Year | Control | Control | Control | Control |
| Industry | Control | Control | Control | Control |
| Constant | 8.083 (20.78) | −0.428 (−2.43) | −4.183 (−9.70) | −1.077 (−6.64) |
| R² | 0.194 | 0.067 | 0.194 | 0.067 |
| Observations | 43,439 | 43,439 | 43,439 | 43,439 |
Robustness tests (dependent variable: Fraud; test statistics in parentheses): Heckman two-step in columns (1)–(2), instrumental-variable estimates in columns (3)–(4), a one-phase lag in column (5), fixed effects in column (6), and the small-company subsample in column (7):

| Variable | (1) | (2) | (3) | (4) | (5) | (6) | (7) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Centrality | | −0.056 *** (−6.41) | −1.394 *** (−17.56) | −0.024 *** (−3.48) | −0.058 *** (−6.26) | −0.012 *** (−6.55) | −0.026 * (−1.65) |
| Lev | 0.009 * (1.77) | 0.108 *** (3.66) | −0.010 (−1.33) | 0.005 ** (2.28) | 0.379 *** (8.11) | 0.005 ** (2.42) | 0.009 (0.57) |
| Tobin’s Q | 0.005 (1.50) | −0.004 *** (−6.83) | 0.000 (0.48) | −0.000 (−1.60) | −0.002 (−1.29) | −0.000 * (−1.67) | −0.000 (−0.66) |
| Top1 | 0.001 (1.26) | −0.008 *** (−9.90) | −0.002 *** (−3.60) | −0.002 *** (−9.86) | −0.008 *** (−8.66) | −0.002 *** (−9.74) | −0.008 *** (−4.35) |
| ROE | −0.002 (−1.19) | −0.030 *** (−2.93) | 0.004 * (1.88) | −0.001 ** (−2.57) | −0.024 ** (−2.56) | −0.001 *** (−2.69) | −0.013 (−1.62) |
| Size | −0.000 *** (−3.21) | −0.000 *** (−5.94) | 0.000 *** (2.99) | −0.000 *** (−5.48) | −0.000 *** (−6.68) | −0.000 *** (−5.79) | −0.000 (−0.22) |
| Age | −0.010 *** (−10.16) | 0.023 *** (12.93) | 0.015 *** (10.91) | 0.004 *** (10.71) | 0.015 *** (8.72) | 0.004 *** (10.73) | 0.037 *** (10.23) |
| Boardsize | −0.059 *** (−27.56) | 0.154 *** (23.03) | 0.110 *** (21.55) | 0.026 *** (34.14) | 0.091 *** (27.81) | 0.025 *** (39.29) | 0.118 *** (16.16) |
| SOE | −0.215 *** (−12.89) | −0.355 *** (−10.39) | 0.223 *** (7.63) | −0.099 *** (−15.96) | −0.430 *** (−14.78) | −0.103 *** (−17.35) | −0.388 *** (−6.00) |
| Indsh | −0.003 *** (−11.34) | −0.002 ** (−2.37) | 0.004 *** (7.93) | −0.001 *** (−6.44) | −0.004 *** (−6.76) | −0.001 *** (−7.10) | −0.001 (−1.46) |
| Dual | 0.019 (1.36) | −0.042 * (−1.76) | −0.022 (−1.16) | −0.008 (−1.61) | −0.009 (−0.38) | −0.007 (−1.39) | −0.118 ** (−2.44) |
| IMR | | 0.587 *** (6.04) | | | | | |
| Year | Control | Control | Control | Control | Control | Control | Control |
| Industry | Control | Control | Control | Control | Control | Control | Control |
| Constant | 1.518 (15.20) | −1.972 (−10.72) | 3.022 (14.25) | 0.307 (8.02) | −0.889 (−5.27) | 0.338 (10.11) | −2.159 (−6.55) |
| R² | 0.067 | 0.067 | 0.000 | 0.081 | 0.048 | 0.064 | 0.066 |
| Observations | 43,439 | 43,439 | 43,439 | 43,439 | 38,244 | 43,345 | 10,824 |
Further robustness checks with alternative centrality measures (Centrality-b, Degree, Closeness, Betweenness in columns (1)–(4)) and alternative outcome/sample specifications, including a Fraud Event outcome (test statistics in parentheses):

| Variable | (1) | (2) | (3) | (4) | (5) | (6) | (7) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Centrality | | | | | −0.021 *** (−15.89) | −0.051 *** (−4.85) | −0.053 *** (−5.56) |
| Centrality-b | −0.010 *** (−2.62) | | | | | | |
| Degree | | −0.165 *** (−6.23) | | | | | |
| Closeness | | | −1.568 *** (−5.24) | | | | |
| Betweenness | | | | −119.265 *** (−5.19) | | | |
| Lev | 0.759 *** (12.84) | 0.188 *** (5.37) | 0.190 *** (5.40) | 0.193 *** (5.45) | 0.008 *** (5.27) | 0.181 *** (3.94) | 0.057 ** (2.53) |
| Tobin’s Q | 0.033 *** (4.26) | −0.002 *** (−4.68) | −0.002 *** (−4.69) | −0.002 *** (−4.75) | −0.000 *** (−2.70) | 0.003 (0.68) | −0.001 ** (−1.99) |
| Top1 | −0.006 *** (−7.42) | −0.012 *** (−12.49) | −0.012 *** (−12.28) | −0.012 *** (−12.19) | −0.001 *** (−6.07) | −0.009 *** (−8.54) | −0.007 *** (−7.29) |
| ROE | −0.992 *** (−14.95) | −0.027 *** (−2.87) | −0.027 *** (−2.93) | −0.027 *** (−2.95) | −0.002 *** (−5.23) | −0.017 * (−1.95) | −0.019 ** (−2.10) |
| Size | −0.000 *** (−9.22) | −0.000 *** (−7.76) | −0.000 *** (−7.82) | −0.000 *** (−7.73) | −0.000 *** (−6.04) | −0.000 *** (−5.61) | −0.000 *** (−6.18) |
| Age | 0.001 *** (6.11) | 0.027 *** (13.75) | 0.026 *** (13.39) | 0.026 *** (13.35) | 0.004 *** (15.09) | 0.015 *** (6.92) | 0.015 *** (7.90) |
| Boardsize | 0.113 *** (32.77) | 0.173 *** (44.90) | 0.168 *** (45.47) | 0.167 *** (45.37) | 0.036 *** (79.48) | 0.121 *** (27.88) | 0.125 *** (33.81) |
| SOE | −0.521 *** (−18.32) | −0.719 *** (−21.55) | −0.722 *** (−21.67) | −0.727 *** (−21.85) | −0.104 *** (−24.30) | −0.427 *** (−11.79) | −0.487 *** (−15.79) |
| Indsh | −0.003 *** (−6.37) | −0.005 *** (−8.27) | −0.005 *** (−8.15) | −0.005 *** (−8.04) | −0.001 *** (−7.96) | −0.003 *** (−4.32) | −0.003 *** (−5.99) |
| Dual | −0.032 ** (−1.33) | −0.023 (−0.81) | −0.022 (−0.79) | −0.022 (−0.79) | −0.003 (−0.07) | −0.028 (−0.96) | −0.031 (−1.15) |
| Year | Control | Control | Control | Control | Control | Control | Control |
| Industry | Control | Control | Control | Control | Control | Control | Control |
| Constant | −1.421 (−8.60) | −2.014 (−10.74) | −2.238 (−12.32) | −2.287 (−12.62) | −0.127 (−5.01) | −1.462 (−11.14) | −1.067 (−6.07) |
| R² | 0.073 | 0.155 | 0.155 | 0.155 | 0.179 | 0.061 | 0.058 |
| Observations | 43,338 | 43,439 | 43,439 | 43,439 | 43,345 | 28,297 | 34,004 |
Subnetwork analysis (dependent variable: Fraud; test statistics in parentheses):

| Variable | Non-Independent Directors Network (1) | Independent Directors Network (2) | Women Directors Network (3) |
| --- | --- | --- | --- |
| Centrality | −1.161 *** (−21.05) | −0.955 *** (−19.01) | −0.049 *** (−2.60) |
| Lev | 0.202 *** (5.57) | 0.231 *** (6.10) | 0.191 *** (5.40) |
| Tobin’s Q | −0.002 *** (−4.86) | −0.002 *** (−5.41) | −0.002 *** (−4.70) |
| Top1 | −0.013 *** (−13.08) | −0.014 *** (−14.20) | −0.012 *** (−12.11) |
| ROE | −0.030 *** (−3.15) | −0.030 *** (−3.15) | −0.027 *** (−2.95) |
| Size | −0.000 *** (−6.20) | −0.000 *** (−6.57) | −0.000 *** (−8.02) |
| Age | 0.033 *** (16.43) | 0.025 *** (13.45) | 0.025 *** (13.06) |
| Boardsize | 0.209 *** (46.09) | 0.202 *** (46.47) | 0.167 *** (45.32) |
| SOE | −0.608 *** (−17.83) | −0.598 *** (−17.90) | −0.729 *** (−21.88) |
| Indsh | −0.004 *** (−5.80) | −0.004 *** (−6.04) | −0.005 *** (−8.16) |
| Dual | −0.026 (−0.94) | −0.023 (−0.82) | −0.020 (−0.71) |
| Year | Control | Control | Control |
| Industry | Control | Control | Control |
| Constant | 0.627 (2.73) | 0.248 (1.11) | −2.329 (−12.87) |
| R² | 0.116 | 0.115 | 0.115 |
| Observations | 42,630 | 42,627 | 43,312 |
Additional regressions (dependent variable: Fraud; test statistics in parentheses):

| Variable | (1) | (2) | (3) |
| --- | --- | --- | --- |
| Centrality | −0.019 ** (−2.21) | −0.003 (−0.29) | −0.070 *** (−3.90) |
| Lev | 0.114 *** (3.86) | 0.114 *** (3.86) | 0.113 *** (3.85) |
| Tobin’s Q | −0.001 *** (−3.25) | −0.001 *** (−3.25) | −0.001 *** (−3.23) |
| Top1 | −0.008 *** (−9.57) | −0.008 *** (−9.55) | −0.008 *** (−9.63) |
| ROE | −0.031 *** (−3.09) | −0.032 *** (−3.10) | −0.032 *** (−3.10) |
| Size | −0.000 *** (−6.85) | −0.000 *** (−6.90) | −0.000 *** (−6.76) |
| Age | 0.016 *** (10.01) | 0.016 *** (9.95) | 0.016 *** (9.99) |
| Boardsize | 0.117 *** (34.46) | 0.116 *** (34.05) | 0.117 *** (35.23) |
| SOE | −0.493 *** (−17.53) | −0.494 *** (−17.57) | −0.491 *** (−17.42) |
| Indsh | −0.004 *** (−7.14) | −0.004 *** (−7.25) | −0.004 *** (−6.94) |
| Dual | −0.030 (−1.26) | −0.030 (−1.25) | −0.030 (−1.26) |
| Year | Control | Control | Control |
| Industry | Control | Control | Control |
| Constant | −1.201 (−7.44) | −1.196 (−7.41) | −1.196 (−7.41) |
| R² | 0.066 | 0.066 | 0.066 |
| Observations | 43,338 | 43,338 | 43,338 |

Share and Cite

Zeng, S.; Xiao, L.; Jiang, X.; Huang, Y.; Li, Y.; Yuan, C. Do Directors’ Network Positions Affect Corporate Fraud? Sustainability 2024, 16, 6675. https://doi.org/10.3390/su16156675


  • DOI: 10.1080/2153599x.2024.2378992
  • Corpus ID: 271815824

Testing the religion/spirituality-mental health curvilinear hypothesis using data from many-analysts religion project

  • Luke Galen, David Speed
  • Published 8 August 2024 in Religion, Brain & Behavior


bioRxiv

Spatial transcriptomic data reveals pure cell types via the mosaic hypothesis

  • Yiliu Wang, Christof Koch, Uygar Sümbül

Neurons display remarkable diversity in their anatomical, molecular, and physiological properties. Although observed stereotypy in subsets of neurons is a pillar of neuroscience, clustering in high-dimensional feature spaces, such as those defined by single cell RNA-seq data, is often inconclusive and cells seemingly occupy continuous, rather than discrete, regions. In the retina, a layered structure, neurons of the same discrete type avoid spatial proximity with each other. While this principle, which is independent of clustering in feature space, has been a gold standard for retinal cell types, its applicability to the cortex has been only sparsely explored. Here, we provide evidence for such a mosaic hypothesis by developing a statistical point process analysis framework for spatial transcriptomic data. We demonstrate spatial avoidance across many excitatory and inhibitory neuronal types. Spatial avoidance disappears when cell types are merged, potentially offering a gold standard metric for evaluating the purity of putative cell types.
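One simple way to operationalize such a spatial-avoidance test (a minimal sketch of the general idea, not the authors' specific point-process framework) is to compare the mean nearest-neighbor distance among cells of one putative type against a null built by shuffling type labels:

```python
import numpy as np
from scipy.spatial import cKDTree

# Simulated cell positions and type labels; everything here is illustrative.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 1, size=(500, 2))
labels = rng.integers(0, 5, size=500)

def mean_nn_distance(points):
    tree = cKDTree(points)
    d, _ = tree.query(points, k=2)   # k=2: the nearest neighbor of a point is itself
    return d[:, 1].mean()

observed = mean_nn_distance(xy[labels == 0])
null = [mean_nn_distance(xy[rng.permutation(labels) == 0]) for _ in range(999)]

# Spatial avoidance within a pure type would show up as larger-than-null spacing.
p = (1 + sum(n >= observed for n in null)) / 1000.0
print(f"observed NN distance {observed:.4f}, permutation p = {p:.3f}")
```

With random labels, as simulated here, the test should not be significant; merged or impure types would likewise lose the avoidance signal, which is the metric the abstract proposes.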

Competing Interest Statement

The authors have declared no competing interest.

https://alleninstitute.github.io/abc_atlas_access/


Subject Area

  • Neuroscience

IMAGES

  1. Statistical Hypothesis Testing step by step procedure

  2. Hypothesis Testing Solved Examples (Questions and Solutions)

  3. How to do Hypothesis Testing: A Beginner Guide For Data Scientist

  4. Hypothesis Testing

  5. What is Hypothesis Testing?

  6. Guide to Hypothesis Testing for Data Scientists

COMMENTS

  1. Hypothesis testing for data scientists

    Hypothesis testing is a common statistical tool used in research and data science to assess how certain findings are. The aim of a test is to answer how probable it is that an apparent effect would be observed by chance in a random data sample. This article provides a detailed explanation of the key concepts in Frequentist hypothesis testing, using problems from the business domain as examples.

  2. Introduction to Hypothesis Testing with Examples

    In binary hypothesis testing problems, we are presented with two choices, called hypotheses, and must decide which one to pick. The hypotheses are represented by H₀ and H₁ and are called the null and alternative hypotheses, respectively. In hypothesis testing, we either reject or fail to reject the null hypothesis.

  3. A Complete Guide to Hypothesis Testing

    Hypothesis testing is a method of statistical inference that considers the null hypothesis H ₀ vs. the alternative hypothesis H a, where we are typically looking to assess evidence against H ₀. Such a test is used to compare data sets against one another, or compare a data set against some external standard. The former being a two sample ...

  4. Hypothesis Testing Guide for Data Science Beginners

    Learn how to perform hypothesis testing for data science projects with this easy guide. Find examples, tips, and related articles on statistics and machine learning.

  5. Hypothesis Testing in Data Science: A Comprehensive Guide

    Hypothesis Testing in Data Science is a crucial method for making informed decisions from data. This blog explores its essential usage in analysing trends and patterns, and the different types such as null, alternative, one-tailed, and two-tailed tests, providing a comprehensive understanding for both beginners and advanced practitioners.

  6. Mastering Hypothesis Testing: A Comprehensive Guide for ...

    1. Introduction to Hypothesis Testing - Definition and significance in research and data analysis. - Brief historical background. 2. Fundamentals of Hypothesis Testing - Null and Alternative…

  7. Data Science from Scratch (ch7)

    Data Science from Scratch (ch7) - Hypothesis and Inference. Connecting probability and statistics to hypothesis testing and inference. This is a continuation of my progress through Data Science from Scratch by Joel Grus. We'll use a classic coin-flipping example in this post because it is simple to illustrate (a minimal binomial-test sketch of this example appears after this list).

  8. Understanding Hypothesis Testing in Data Science: T-tests, F ...

    Statistical analysis forms the backbone of any data science workflow. Among the statistical concepts we regularly encounter in data science, Hypothesis Testing is one of the most essential.

  9. Hypothesis Testing in Data Science [Types, Process, Example]

    Explore what hypothesis testing is in data science. Read on to understand the types of hypothesis testing, its importance, workflow, and real-world examples of statistical hypotheses.

  10. A Crash Course on Hypothesis Testing

    Hypothesis testing is a quintessential part of statistical inference in a data science context.

  11. Statistical Inference and Hypothesis Testing in Data Science

    This course will focus on theory and implementation of hypothesis testing, especially as it relates to applications in data science. Students will learn to use hypothesis tests to make informed decisions from data.

  12. Hypothesis testing

    Hypothesis testing can be thought of as a way to investigate the consistency of a dataset with a model, where a model is a set of rules that describe how data are generated. The consistency is evaluated using ideas from probability and probability distributions.

  13. Hypothesis Testing Steps & Examples

    Hypothesis testing is a technique that helps scientists, researchers, and indeed anyone test the validity of claims or hypotheses about real-world events in order to establish new knowledge. Hypothesis testing techniques are often used in statistics and data science to analyse whether claims about the occurrence of events are true, whether the results ...

  14. Hypothesis Testing in Data Science

    Hypothesis Testing vs Hypothesis Generation: in the world of Data Science, there are two parts to consider when putting together a hypothesis. Hypothesis Testing is when the team tests a strong hypothesis built on the available dataset. This helps direct the team and plan accordingly throughout the data science project.

  15. 17 Statistical Hypothesis Tests in Python (Cheat Sheet)

    Quick-reference guide to the 17 statistical hypothesis tests that you need in applied machine learning, with sample code in Python. Although there are hundreds of statistical hypothesis tests that you could use, there is only a small subset that you may need to use in a machine learning project. In this post, you will discover […]

  16. Understanding Hypothesis Testing

    Hypothesis testing is an important mathematical concept used in the field of data science. While it's easy to call a method from a Python library that will carry out the test for you, it's both necessary and interesting to know what is actually happening behind the scenes!

  17. Hypothesis Testing

    Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is most often used by scientists to test specific predictions, called hypotheses, that arise from theories.

  18. Understanding Hypothesis Testing

    Hypothesis testing is a statistical method used to make decisions from experimental data. It starts from an assumption we make about a population parameter and evaluates two mutually exclusive statements about the population to determine which statement is best supported by the sample data.

  19. Hypothesis Tests

    Hypothesis tests # Formal hypothesis testing is perhaps the most prominent and widely-employed form of statistical analysis. It is sometimes seen as the most rigorous and definitive part of a statistical analysis, but it is also the source of many statistical controversies. The currently-prevalent approach to hypothesis testing dates to developments that took place between 1925 and 1940 ...

  20. 5 Statistical Skills That Will Make You Stand Out in a Data Science

    During an interview, emphasizing your proficiency in hypothesis testing demonstrates your ability to approach problems scientifically and draw meaningful conclusions from data. When discussing hypothesis testing, focus on your ability to formulate clear null and alternative hypotheses, select appropriate significance levels, and interpret p-values.

  21. What is Hypothesis Testing in Statistics? Types and Examples

    Hypothesis testing is a statistical method used to determine if there is enough evidence in a sample data to draw conclusions about a population. It involves formulating two competing hypotheses, the null hypothesis (H0) and the alternative hypothesis (Ha), and then collecting data to assess the evidence.

  22. z -test in statistics! hypothesis testing

    Learn how to perform a z-test in statistics and understand its significance in hypothesis testing for data science and data analysis. This tutorial will guide… (a minimal z-test sketch appears after this list).

  23. [2408.04213] Hypothesis testing for general network models

    Network data have attracted considerable attention in modern statistics. A key issue in research on complex network data is finding the underlying connection structure given a network sample. The methods proposed in the literature usually assume that the underlying structure follows a known model. In practice, however, the true model is usually unknown, and network learning ...

  24. Hypothesis Testing

    Understanding the intuition behind Hypothesis Testing: what exactly it is, why we do it, and how Data Scientists perform it. Let's…

  25. Fast and Accurate LSTM Meta-modeling of TNF-induced Tumor ...

    Multi-level, hybrid models and simulations are essential for prediction and hypothesis generation in systems biology research. However, the computational complexity of these models poses a bottleneck, limiting the applicability of methodologies that rely on a large number of simulations, such as Optimization via Simulation (OvS) of complex biological processes.

  26. Do Directors' Network Positions Affect Corporate Fraud?

    Corporate fraud poses a significant obstacle to sustainable business development. Drawing on social network analysis, this paper used data from Chinese listed companies from 2009 to 2022 and found that directors' network position significantly mitigates corporate fraud. Mechanism tests indicated that the quality of external audit and internal control play a mediating role in ...

  27. 10 Statistics Questions to Ace Your Data Science Interview

    I spent time scouring the Internet for resources to better understand concepts like hypothesis testing and confidence intervals. And after interviewing for multiple data science positions, I've found that most statistics interview questions followed a similar pattern.

  28. Testing the religion/spirituality-mental health curvilinear hypothesis

    Semantic Scholar extracted view of "Testing the religion/spirituality-mental health curvilinear hypothesis using data from many-analysts religion project" by Luke Galen et al.

  29. Hypothesis Testing with Python: Step by step hands-on tutorial with

    In this article, I want to show hypothesis testing with Python on several questions, step by step. But first, let me explain the hypothesis testing process briefly. If you wish, you can move straight to the questions.

  30. Spatial transcriptomic data reveals pure cell types via the mosaic

    Here, we provide evidence for such a mosaic hypothesis by developing a statistical point process analysis framework for spatial transcriptomic data. We demonstrate spatial avoidance across many excitatory and inhibitory neuronal types.
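Two minimal Python sketches close out the list. First, the classic coin-flip example referenced in item 7 reduces to an exact binomial test of H₀: p = 0.5 (the counts below are illustrative, not from the book):

```python
from scipy import stats

# 530 heads in 1000 flips: is this consistent with a fair coin (p = 0.5)?
heads, flips = 530, 1000
result = stats.binomtest(heads, flips, p=0.5, alternative="two-sided")
print(f"p-value = {result.pvalue:.4f}")  # ~0.06: no strong evidence against fairness
```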
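Second, the one-sample z-test referenced in item 22, for a mean when the population standard deviation is known (all numbers are illustrative):

```python
import math
from scipy import stats

# H0: mu = 50. Sample of n = 100 with mean 52; known sigma = 10.
mu0, sigma, n, xbar = 50, 10, 100, 52

z = (xbar - mu0) / (sigma / math.sqrt(n))
p = 2 * (1 - stats.norm.cdf(abs(z)))  # two-tailed p-value
print(f"z = {z:.2f}, p = {p:.4f}")     # z = 2.00, p ~ 0.0455 -> reject H0 at the 5% level
```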