Hypothesis Testing | A Step-by-Step Guide with Easy Examples

Published on November 8, 2019 by Rebecca Bevans. Revised on June 22, 2023.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is most often used by scientists to test specific predictions, called hypotheses, that arise from theories.

There are 5 main steps in hypothesis testing:

  • State your research hypothesis as a null hypothesis (H0) and an alternate hypothesis (Ha or H1).
  • Collect data in a way designed to test the hypothesis.
  • Perform an appropriate statistical test.
  • Decide whether to reject or fail to reject your null hypothesis.
  • Present the findings in your results and discussion section.

Though the specific details might vary, the procedure you will use when testing a hypothesis will always follow some version of these steps.

Table of contents

  • Step 1: State your null and alternate hypothesis
  • Step 2: Collect data
  • Step 3: Perform a statistical test
  • Step 4: Decide whether to reject or fail to reject your null hypothesis
  • Step 5: Present your findings
  • Other interesting articles
  • Frequently asked questions about hypothesis testing

After developing your initial research hypothesis (the prediction that you want to investigate), it is important to restate it as a null (H0) and alternate (Ha) hypothesis so that you can test it mathematically.

The alternate hypothesis is usually your initial hypothesis that predicts a relationship between variables. The null hypothesis is a prediction of no relationship between the variables you are interested in.

  • H0: Men are, on average, not taller than women.
  • Ha: Men are, on average, taller than women.


For a statistical test to be valid, it is important to perform sampling and collect data in a way that is designed to test your hypothesis. If your data are not representative, then you cannot make statistical inferences about the population you are interested in.

There are a variety of statistical tests available, but they are all based on the comparison of within-group variance (how spread out the data is within a category) versus between-group variance (how different the categories are from one another).

If the between-group variance is large enough that there is little or no overlap between groups, then your statistical test will reflect that by showing a low p-value. This means it is unlikely that the differences between these groups came about by chance.

Alternatively, if there is high within-group variance and low between-group variance, then your statistical test will reflect that with a high p-value. This means it is likely that any difference you measure between groups is due to chance.

Your choice of statistical test will be based on the type of variables and the level of measurement of your collected data.

In the height example, a t test comparing the two groups would give you:

  • an estimate of the difference in average height between the two groups.
  • a p-value showing how likely you are to see this difference if the null hypothesis of no difference is true.
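As a rough sketch of those two outputs, the snippet below compares two small samples using only the Python standard library. The height values are invented for illustration, and the p-value uses a normal approximation to the test statistic (a t distribution would be more appropriate at these sample sizes):

```python
import math
from statistics import mean, stdev

# Hypothetical height samples in cm -- illustrative numbers, not real survey data.
men   = [178, 182, 175, 180, 176, 183, 179, 181, 177, 184]
women = [165, 169, 162, 170, 166, 168, 164, 167, 171, 163]

# Estimate of the difference in average height between the two groups.
diff = mean(men) - mean(women)

# Standard error of the difference (Welch-style formula).
se = math.sqrt(stdev(men) ** 2 / len(men) + stdev(women) ** 2 / len(women))

# Test statistic: how many standard errors the observed difference is from 0.
z = diff / se

# One-sided p-value via the normal approximation (Ha: men taller than women).
p = 0.5 * math.erfc(z / math.sqrt(2))

print(f"difference = {diff:.2f} cm, z = {z:.2f}, p = {p:.3g}")
```

With these made-up numbers the groups barely overlap, so the p-value comes out extremely small and the null hypothesis of no difference would be rejected.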

Based on the outcome of your statistical test, you will have to decide whether to reject or fail to reject your null hypothesis.

In most cases you will use the p-value generated by your statistical test to guide your decision. And in most cases, your predetermined level of significance for rejecting the null hypothesis will be 0.05 – that is, when there is a less than 5% chance that you would see these results if the null hypothesis were true.

In some cases, researchers choose a more conservative level of significance, such as 0.01 (1%). This minimizes the risk of incorrectly rejecting the null hypothesis ( Type I error ).

The results of hypothesis testing will be presented in the results and discussion sections of your research paper, dissertation or thesis.

In the results section you should give a brief summary of the data and a summary of the results of your statistical test (for example, the estimated difference between group means and associated p-value). In the discussion, you can discuss whether your initial hypothesis was supported by your results or not.

In the formal language of hypothesis testing, we talk about rejecting or failing to reject the null hypothesis. You will probably be asked to do this in your statistics assignments.

However, when presenting research results in academic papers we rarely talk this way. Instead, we go back to our alternate hypothesis (in this case, the hypothesis that men are on average taller than women) and state whether the result of our test did or did not support the alternate hypothesis.

If your null hypothesis was rejected, this result is interpreted as “supported the alternate hypothesis.”

These are superficial differences; you can see that they mean the same thing.

You might notice that we don’t say that we reject or fail to reject the alternate hypothesis. This is because hypothesis testing is not designed to prove or disprove anything. It is only designed to test whether a pattern we measure could have arisen spuriously, or by chance.

If we reject the null hypothesis based on our research (i.e., we find that it is unlikely that the pattern arose by chance), then we can say our test lends support to our hypothesis. But if the pattern does not pass our decision rule, meaning that it could have arisen by chance, then we say the test is inconsistent with our hypothesis.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Descriptive statistics
  • Measures of central tendency
  • Correlation coefficient

Methodology

  • Cluster sampling
  • Stratified sampling
  • Types of interviews
  • Cohort study
  • Thematic analysis

Research bias

  • Implicit bias
  • Cognitive bias
  • Survivorship bias
  • Availability heuristic
  • Nonresponse bias
  • Regression to the mean

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses, by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Null and alternative hypotheses are used in statistical hypothesis testing. The null hypothesis of a test always predicts no effect or no relationship between variables, while the alternative hypothesis states your research prediction of an effect or relationship.


Module 9: Hypothesis Testing With One Sample

Distribution Needed for Hypothesis Testing

Learning Outcomes

  • Conduct and interpret hypothesis tests for a single population mean, population standard deviation known
  • Conduct and interpret hypothesis tests for a single population mean, population standard deviation unknown

Earlier in the course, we discussed sampling distributions. Particular distributions are associated with hypothesis testing. We perform tests of a population mean using a normal distribution or a Student’s t-distribution. (Remember, use a Student’s t-distribution when the population standard deviation is unknown and the distribution of the sample mean is approximately normal.) We perform tests of a population proportion using a normal distribution (usually when the sample size n is large).

If you are testing a  single population mean , the distribution for the test is for means :

[latex]\displaystyle\overline{{X}}\text{~}{N}{\left(\mu_{{X}}\text{ , }\frac{{\sigma_{{X}}}}{\sqrt{{n}}}\right)}{\quad\text{or}\quad}{t}_{{{d}{f}}}[/latex]

The population parameter is [latex]\mu[/latex]. The estimated value (point estimate) for [latex]\mu[/latex] is [latex]\displaystyle\overline{{x}}[/latex], the sample mean.

If you are testing a  single population proportion , the distribution for the test is for proportions or percentages:

[latex]\displaystyle{P}^{\prime}\text{~}{N}{\left({p}\text{ , }\sqrt{{\frac{{{p}{q}}}{{n}}}}\right)}[/latex]

The population parameter is [latex]p[/latex]. The estimated value (point estimate) for [latex]p[/latex] is [latex]\displaystyle{p}^{\prime}=\frac{{x}}{{n}}[/latex], where [latex]x[/latex] is the number of successes and [latex]n[/latex] is the sample size.

Assumptions

When you perform a hypothesis test of a single population mean μ using a Student’s t-distribution (often called a t-test), there are fundamental assumptions that need to be met in order for the test to work properly. Your data should be a simple random sample that comes from a population that is approximately normally distributed. You use the sample standard deviation to approximate the population standard deviation. (Note that if the sample size is sufficiently large, a t-test will work even if the population is not approximately normally distributed.)

When you perform a hypothesis test of a single population mean μ using a normal distribution (often called a z-test), you take a simple random sample from the population. The population you are testing is normally distributed or your sample size is sufficiently large. You know the value of the population standard deviation, which, in reality, is rarely known.

When you perform a hypothesis test of a single population proportion p, you take a simple random sample from the population. You must meet the conditions for a binomial distribution: there is a certain number n of independent trials, the outcome of any trial is either success or failure, and each trial has the same probability of a success p. The shape of the binomial distribution needs to be similar to the shape of the normal distribution. To ensure this, the quantities np and nq must both be greater than five (np > 5 and nq > 5). Then the binomial distribution of a sample (estimated) proportion can be approximated by the normal distribution with μ = p and [latex]\displaystyle\sigma=\sqrt{{\frac{{{p}{q}}}{{n}}}}[/latex]. Remember that q = 1 − p.
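The point estimate and the np/nq conditions above can be sketched in a few lines of standard-library Python. The sample counts here (61 successes in 100 trials against a hypothesized p of 0.50) are made up purely to illustrate the arithmetic:

```python
import math

# Hypothetical sample: x successes out of n trials (illustrative values).
x, n = 61, 100
p0 = 0.50          # hypothesized population proportion
q0 = 1 - p0        # q = 1 - p

# Check the normal-approximation conditions np > 5 and nq > 5.
assert n * p0 > 5 and n * q0 > 5, "normal approximation not appropriate"

# Point estimate p' = x / n, and the standard error sqrt(pq/n)
# (computed with the hypothesized p, as in a hypothesis test).
p_hat = x / n
se = math.sqrt(p0 * q0 / n)

print(f"p' = {p_hat:.3f}, SE = {se:.3f}")
```

If np or nq were 5 or below, the assertion would fail, signaling that the normal approximation to the binomial should not be used.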

Concept Review

In order for a hypothesis test’s results to be generalized to a population, certain requirements must be satisfied.

When testing for a single population mean:

  • A Student’s t-test should be used if the data come from a simple random sample, the population is approximately normally distributed (or the sample size is large), and the population standard deviation is unknown.
  • The normal test will work if the data come from a simple random sample, the population is approximately normally distributed (or the sample size is large), and the population standard deviation is known.

When testing a single population proportion, use a normal test if the data come from a simple random sample, the requirements for a binomial distribution are met, and the mean numbers of successes and failures satisfy the conditions np > 5 and nq > 5, where n is the sample size, p is the probability of a success, and q is the probability of a failure.

Formula Review

If there is no given preconceived  α , then use α = 0.05.

Types of Hypothesis Tests

  • Single population mean, known population variance (or standard deviation): normal test.
  • Single population mean, unknown population variance (or standard deviation): Student’s t-test.
  • Single population proportion: normal test.
  • For a single population mean, we may use a normal distribution with the following mean and standard deviation. Means: [latex]\displaystyle\mu=\mu_{{\overline{{x}}}}{\quad\text{and}\quad}\sigma_{{\overline{{x}}}}=\frac{{\sigma_{{x}}}}{\sqrt{{n}}}[/latex]
  • For a single population proportion, we may use a normal distribution with the following mean and standard deviation. Proportions: [latex]\displaystyle\mu={p}{\quad\text{and}\quad}\sigma=\sqrt{{\frac{{{p}{q}}}{{n}}}}[/latex].

7.4.1 - Hypothesis Testing

Five Step Hypothesis Testing Procedure

In the remaining lessons, we will use the following five step hypothesis testing procedure. This is slightly different from the five step procedure that we used when conducting randomization tests. 

  • Check assumptions and write hypotheses.  The assumptions will vary depending on the test. In this lesson we'll be confirming that the sampling distribution is approximately normal by visually examining the randomization distribution. In later lessons you'll learn more objective assumptions. The null and alternative hypotheses will always be written in terms of population parameters; the null hypothesis will always contain the equality (i.e., \(=\)).
  • Calculate the test statistic.  Here, we'll be using the formula below for the general form of the test statistic.
  • Determine the p-value.  The p-value is the area under the standard normal distribution that is more extreme than the test statistic in the direction of the alternative hypothesis.
  • Make a decision.  If \(p \leq \alpha\) reject the null hypothesis. If \(p>\alpha\) fail to reject the null hypothesis.
  • State a "real world" conclusion.  Based on your decision in step 4, write a conclusion in terms of the original research question.

General Form of a Test Statistic

When using a standard normal distribution (i.e., the z distribution), the test statistic is the standardized value that marks the boundary of the p-value. Recall the formula for a z score: \(z=\frac{x-\overline x}{s}\). The formula for a test statistic is similar. When conducting a hypothesis test, the sampling distribution is centered on the null parameter, and its standard deviation is known as the standard error.

This formula puts our observed sample statistic on a standard scale (e.g., z distribution). A z score tells us where a score lies on a normal distribution in standard deviation units. The test statistic tells us where our sample statistic falls on the sampling distribution in standard error units.

7.4.1.1 - Video Example: Mean Body Temperature

Research question:  Is the mean body temperature in the population different from 98.6° Fahrenheit?

7.4.1.2 - Video Example: Correlation Between Printer Price and PPM

Research question:  Is there a positive correlation in the population between the price of an ink jet printer and how many pages per minute (ppm) it prints?

7.4.1.3 - Example: Proportion NFL Coin Toss Wins

Research question:  Is the proportion of NFL overtime coin tosses that are won different from 0.50?

StatKey was used to construct a randomization distribution:

Step 1: Check assumptions and write hypotheses

From the given StatKey output, the randomization distribution is approximately normal.

\(H_0\colon p=0.50\)

\(H_a\colon p \ne 0.50\)

Step 2: Calculate the test statistic

\(test\;statistic=\dfrac{sample\;statistic-null\;parameter}{standard\;error}\)

The sample statistic is the proportion in the original sample, 0.561. The null parameter is 0.50. And, the standard error is 0.024.

\(test\;statistic=\dfrac{0.561-0.50}{0.024}=\dfrac{0.061}{0.024}=2.542\)

Step 3: Determine the p value

The p value will be the area on the z distribution that is more extreme than the test statistic of 2.542, in the direction of the alternative hypothesis. This is a two-tailed test:

The p value is the area in the left and right tails combined: \(p=0.0055110+0.0055110=0.011022\)

Step 4: Make a decision

The p value (0.011022) is less than the standard 0.05 alpha level, therefore we reject the null hypothesis.

Step 5: State a "real world" conclusion

There is evidence that the proportion of all NFL overtime coin tosses that are won is different from 0.50.
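Steps 2–4 of this example can be reproduced with the standard library, using the sample proportion (0.561), null parameter (0.50), and standard error (0.024) given above:

```python
import math

# Values from the coin-toss example.
sample_stat, null_param, se = 0.561, 0.50, 0.024

# Step 2: test statistic = (sample statistic - null parameter) / standard error.
z = (sample_stat - null_param) / se

# Step 3: two-tailed p-value, the area in both tails beyond |z|
# on the z distribution (erfc gives 2 * P(Z > |z|)).
p = math.erfc(abs(z) / math.sqrt(2))

# Step 4: decision at the 0.05 alpha level.
decision = "reject H0" if p <= 0.05 else "fail to reject H0"
print(f"z = {z:.3f}, p = {p:.6f}, {decision}")
```

This recovers the test statistic of about 2.542 and a p-value of about 0.011, matching the decision to reject the null hypothesis.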

7.4.1.4 - Example: Proportion of Women Students

Research question : Are more than 50% of all World Campus STAT 200 students women?

Data were collected from a representative sample of 501 World Campus STAT 200 students. In that sample, 284 students were women and 217 were not women. 

StatKey was used to construct a sampling distribution using randomization methods:

Because this randomization distribution is approximately normal, we can find the p value by computing a standardized test statistic and using the z distribution.

1. Check assumptions and write hypotheses

The assumption here is that the sampling distribution is approximately normal. From the given StatKey output, the randomization distribution is approximately normal.

\(H_0\colon p=0.50\)

\(H_a\colon p>0.50\)

2. Calculate the test statistic

\(test\;statistic=\dfrac{sample\;statistic-hypothesized\;parameter}{standard\;error}\)

The sample statistic is \(\widehat p = 284/501 = 0.567\).

The hypothesized parameter is the value from the hypotheses: \(p_0=0.50\).

The standard error on the randomization distribution above is 0.022.

\(test\;statistic=\dfrac{0.567-0.50}{0.022}=3.045\)

3. Determine the p value

We can find the p value by constructing a standard normal distribution and finding the area under the curve that is more extreme than our observed test statistic of 3.045, in the direction of the alternative hypothesis. In other words, \(P(z>3.045)\):

Our p value is 0.0011634.

4. Make a decision

Our p value is less than or equal to the standard 0.05 alpha level, therefore we reject the null hypothesis.

5. State a "real world" conclusion

There is evidence that the proportion of all World Campus STAT 200 students who are women is greater than 0.50.
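The same calculation in code, this time with a one-tailed p-value because the alternative hypothesis is directional (\(p>0.50\)). The sample proportion is rounded to three decimals, as in the example, and the standard error (0.022) is taken from the StatKey output described above:

```python
import math

# Values from the example: 284 women out of 501 students.
x, n = 284, 501
p_hat = round(x / n, 3)        # sample proportion, rounded as in the example
p0, se = 0.50, 0.022           # null parameter and SE from StatKey

# Test statistic.
z = (p_hat - p0) / se

# One-tailed p-value: P(Z > z), since Ha: p > 0.50.
p = 0.5 * math.erfc(z / math.sqrt(2))

print(f"p-hat = {p_hat}, z = {z:.3f}, p = {p:.5f}")
```

This recovers the test statistic of about 3.045 and a p-value of about 0.0012, again leading to rejection of the null hypothesis at the 0.05 level.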

7.4.1.5 - Example: Mean Quiz Score

Research question:  Is the mean quiz score different from 14 in the population?

\(H_0\colon \mu = 14\)

\(H_a\colon \mu \ne 14\)

The sample statistic is the mean in the original sample, 13.746 points. The null parameter is 14 points. And, the standard error, 0.142, can be found on the StatKey output.

\(test\;statistic=\dfrac{13.746-14}{0.142}=\dfrac{-0.254}{0.142}=-1.789\)

The p value will be the area on the z distribution that is more extreme than the test statistic of -1.789, in the direction of the alternative hypothesis:

This was a two-tailed test. The p value is the area in the left and right tails combined: \(p=0.0368074+0.0368074=0.0736148\)

The p value (0.0736148) is greater than the standard 0.05 alpha level, therefore we fail to reject the null hypothesis.

There is not enough evidence to state that the mean quiz score in the population is different from 14 points. 

7.4.1.6 - Example: Difference in Mean Commute Times

Research question:  Do the mean commute times in Atlanta and St. Louis differ in the population? 

 From the given StatKey output, the randomization distribution is approximately normal.

\(H_0: \mu_1-\mu_2=0\)

\(H_a: \mu_1 - \mu_2 \ne 0\)

Step 2: Compute the test statistic

\(test\;statistic=\dfrac{sample\;statistic - null \; parameter}{standard \;error}\)

The observed sample statistic is \(\overline x _1 - \overline x _2 = 7.14\). The null parameter is 0. And, the standard error, from the StatKey output, is 1.136.

\(test\;statistic=\dfrac{7.14-0}{1.136}=6.285\)

The p value will be the area on the z distribution that is more extreme than the test statistic of 6.285, in the direction of the alternative hypothesis:

This was a two-tailed test. The area in the two tails combined is 0.000000 to six decimal places. Theoretically, the p value cannot be exactly 0, because there is always some chance that a Type I error was committed. This p value would be written as p < 0.001.

The p value is smaller than the standard 0.05 alpha level, therefore we reject the null hypothesis. 

There is evidence that the mean commute times in Atlanta and St. Louis are different in the population. 

Data analysis: hypothesis testing


4.1 The normal distribution

Here, you will look at the concept of normal distribution and the bell-shaped curve. The peak point (the top of the bell) represents the most probable occurrences, while other possible occurrences are distributed symmetrically around the peak point, creating a downward-sloping curve on either side of the peak point.

[Cartoon: a bell-shaped curve. The x-axis is titled ‘How high the hill is’ and the y-axis ‘Number of hills’; the top of the curve is labelled ‘Average hill’ and the lower right tail ‘Big hill’.]

In order to test hypotheses, you need to calculate the test statistic and compare it with the corresponding value on the bell curve. This is done using the concept of ‘normal distribution’.

A normal distribution is a probability distribution that is symmetric about the mean, indicating that data near the mean are more likely to occur than data far from it. In graph form, a normal distribution appears as a bell curve. The values in the x-axis of the normal distribution graph represent the z-scores. The test statistic that you wish to use to test the set of hypotheses is the z-score . A z-score is used to measure how far the observation (sample mean) is from the 0 value of the bell curve (population mean). In statistics, this distance is measured by standard deviation. Therefore, when the z-score is equal to 2, the observation is 2 standard deviations away from the value 0 in the normal distribution curve.
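A minimal sketch of that distance calculation, using assumed illustrative numbers (a population mean of 100, a population standard deviation of 15, and a sample of 36 with mean 105; none of these come from the course text):

```python
import math

# Assumed population parameters and sample values, for illustration only.
mu, sigma, n = 100, 15, 36   # population mean, population sd, sample size
x_bar = 105                  # observed sample mean

# Standard error of the mean: how much sample means vary around mu.
se = sigma / math.sqrt(n)    # 15 / 6 = 2.5

# z-score: how many standard errors the sample mean lies from mu.
z = (x_bar - mu) / se

print(f"z = {z}")
```

A z of 2 means the observed sample mean sits 2 standard deviations (standard errors) to the right of the centre of the bell curve.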

A symmetrical graph reminiscent of a bell showing normal distribution.

A symmetrical graph reminiscent of a bell. The top of the bell-shaped curve appears where the x-axis is at 0. This is labelled as Normal distribution.


StatAnalytica

Step-by-step guide to hypothesis testing in statistics


Hypothesis testing in statistics helps us use data to make informed decisions. It starts with an assumption or guess about a group or population—something we believe might be true. We then collect sample data to check if there is enough evidence to support or reject that guess. This method is useful in many fields, like science, business, and healthcare, where decisions need to be based on facts.

Learning how to do hypothesis testing in statistics step-by-step can help you better understand data and make smarter choices, even when things are uncertain. This guide will take you through each step, from creating your hypothesis to making sense of the results, so you can see how it works in practical situations.

What is Hypothesis Testing?


Hypothesis testing is a method for determining whether data supports a certain idea or assumption about a larger group. It starts by making a guess, like an average or a proportion, and then uses a small sample of data to see if that guess seems true or not.

For example, if a company wants to know if its new product is more popular than its old one, it can use hypothesis testing. They start with a statement like “The new product is not more popular than the old one” (this is the null hypothesis) and compare it with “The new product is more popular” (this is the alternative hypothesis). Then, they look at customer feedback to see if there’s enough evidence to reject the first statement and support the second one.

Simply put, hypothesis testing is a way to use data to help make decisions and understand what the data is really telling us, even when we don’t have all the answers.

Importance Of Hypothesis Testing In Decision-Making And Data Analysis

Hypothesis testing is important because it helps us make smart choices and understand data better. Here’s why it’s useful:

  • Reduces Guesswork : It helps us see if our guesses or ideas are likely correct, even when we don’t have all the details.
  • Uses Real Data : Instead of just guessing, it checks if our ideas match up with real data, which makes our decisions more reliable.
  • Avoids Errors : It helps us avoid mistakes by carefully checking if our ideas are right so we don’t make costly errors.
  • Shows What to Do Next : It tells us if our ideas work or not, helping us decide whether to keep, change, or drop something. For example, a company might test a new ad and decide what to do based on the results.
  • Confirms Research Findings : It makes sure that research results are accurate and not just random chance so that we can trust the findings.

Here’s a simple guide to understanding hypothesis testing, with an example:

1. Set Up Your Hypotheses

Explanation: Start by defining two statements:

  • Null Hypothesis (H0): This is the idea that there is no change or effect. It’s what you assume is true.
  • Alternative Hypothesis (H1): This is what you want to test. It suggests there is a change or effect.

Example: Suppose a company says their new batteries last an average of 500 hours. To check this:

  • Null Hypothesis (H0): The average battery life is 500 hours.
  • Alternative Hypothesis (H1): The average battery life is not 500 hours.

2. Choose the Test

Explanation: Pick a statistical test that fits your data and your hypotheses. Different tests are used for various kinds of data.

Example: Since you’re comparing the average battery life, you use a one-sample t-test.

3. Set the Significance Level

Explanation: Decide how much risk you’re willing to take if you make a wrong decision. This is called the significance level, often set at 0.05 or 5%.

Example: You choose a significance level of 0.05, meaning you’re okay with a 5% chance of being wrong.

4. Gather and Analyze Data

Explanation: Collect your data and perform the test. Calculate the test statistic to see how far your sample result is from what you assumed.

Example: You test 30 batteries and find they last an average of 485 hours. You then calculate how this average compares to the claimed 500 hours using the t-test.

5. Find the p-Value

Explanation: The p-value tells you the probability of getting a result as extreme as yours if the null hypothesis is true.

Example: You find a p-value of 0.0001. This means there’s a very small chance (0.01%) of getting an average battery life of 485 hours or less if the true average is 500 hours.

6. Make Your Decision

Explanation: Compare the p-value to your significance level. If the p-value is smaller, you reject the null hypothesis. If it’s larger, you do not reject it.

Example: Since 0.0001 is much less than 0.05, you reject the null hypothesis. This means the data suggests the average battery life is different from 500 hours.

7. Report Your Findings

Explanation: Summarize what the results mean. State whether you rejected the null hypothesis and what that implies.

Example: You conclude that the average battery life is likely different from 500 hours. This suggests the company’s claim might not be accurate.

Hypothesis testing is a way to use data to check if your guesses or assumptions are likely true. By following these steps—setting up your hypotheses, choosing the right test, deciding on a significance level, analyzing your data, finding the p-value, making a decision, and reporting results—you can determine if your data supports or challenges your initial idea.
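The battery example above can be sketched in code. Note two assumptions not in the original text: the sample standard deviation (set to 20 hours here purely for illustration, since the example reports only n = 30 and a mean of 485) and the use of a normal approximation for the p-value, because the standard library has no t distribution:

```python
import math

# From the example: 30 batteries, sample mean 485 h, claimed mean 500 h.
n, x_bar, mu0 = 30, 485, 500

# ASSUMED sample standard deviation in hours; the example does not report it.
s = 20

# Standard error of the mean and the one-sample t statistic.
se = s / math.sqrt(n)
t = (x_bar - mu0) / se

# Two-tailed p-value via a normal approximation (the exact calculation
# would use a t distribution with n - 1 = 29 degrees of freedom).
p = math.erfc(abs(t) / math.sqrt(2))

alpha = 0.05
decision = "reject H0" if p <= alpha else "fail to reject H0"
print(f"t = {t:.2f}, p = {p:.1e}, {decision}")
```

With this assumed spread the statistic is about −4.1 and the p-value is far below 0.05, consistent in direction with the example’s conclusion that the claimed 500-hour average is not supported.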

Understanding Hypothesis Testing: A Simple Explanation

Hypothesis testing is a way to use data to make decisions. Here’s a straightforward guide:

1. What is the Null and Alternative Hypotheses?

  • Null Hypothesis (H0): This is your starting assumption. It says that nothing has changed or that there is no effect. It’s what you assume to be true until your data shows otherwise. Example: If a company says their batteries last 500 hours, the null hypothesis is: “The average battery life is 500 hours.” This means you think the claim is correct unless you find evidence to prove otherwise.
  • Alternative Hypothesis (H1): This is what you want to find out. It suggests that there is an effect or a difference. It’s what you are testing to see if it might be true. Example: To test the company’s claim, you might say: “The average battery life is not 500 hours.” This means you think the average battery life might be different from what the company says.

2. One-Tailed vs. Two-Tailed Tests

  • One-Tailed Test: This test checks for an effect in only one direction. You use it when you’re only interested in finding out if something is either more or less than a specific value. Example: If you think the battery lasts longer than 500 hours, you would use a one-tailed test to see if the battery life is significantly more than 500 hours.
  • Two-Tailed Test: This test checks for an effect in both directions. Use this when you want to see if something is different from a specific value, whether it’s more or less. Example: If you want to see if the battery life is different from 500 hours, whether it’s more or less, you would use a two-tailed test. This checks for any significant difference, regardless of the direction.

3. Common Misunderstandings

  • Misconception: Hypothesis testing proves the null hypothesis is true. Clarification: Hypothesis testing doesn’t prove that the null hypothesis is true. It just helps you decide if you should reject it. If there isn’t enough evidence against it, you don’t reject it, but that doesn’t mean it’s definitely true.
  • Misconception: A small p-value proves the null hypothesis is false. Clarification: A small p-value shows that your data is unlikely if the null hypothesis is true. It suggests that the alternative hypothesis might be right, but it doesn’t prove the null hypothesis is false.
  • Misconception: The significance level can be picked arbitrarily. Clarification: The significance level (alpha) is a set threshold, like 0.05, that helps you decide how much risk you’re willing to take of making a wrong decision. It should be chosen carefully in advance, not randomly.
  • Misconception: Hypothesis testing guarantees correct results. Clarification: Hypothesis testing helps you make decisions based on data, but it doesn’t guarantee your results are correct. The quality of your data and the right choice of test affect how reliable your results are.

Benefits and Limitations of Hypothesis Testing

Benefits

  • Clear Decisions: Hypothesis testing helps you make clear decisions based on data. It shows whether the evidence supports or goes against your initial idea.
  • Objective Analysis: It relies on data rather than personal opinions, so your decisions are based on facts rather than feelings.
  • Concrete Numbers: You get specific numbers, like p-values, to understand how strong the evidence is against your idea.
  • Control Risk: You can set a risk level (alpha level) to manage the chance of making an error, which helps avoid incorrect conclusions.
  • Widely Used: It can be used in many areas, from science and business to social studies and engineering, making it a versatile tool.

Limitations

  • Sample Size Matters: The results can be affected by the size of the sample. Small samples might give unreliable results, while large samples might find differences that aren’t meaningful in real life.
  • Risk of Misinterpretation: A small p-value means the results are unlikely if the null hypothesis is true, but it doesn’t show how important the effect is.
  • Needs Assumptions: Hypothesis testing requires certain conditions, like data being normally distributed. If these aren’t met, the results might not be accurate.
  • Simple Decisions: It often results in a basic yes or no decision without giving detailed information about the size or impact of the effect.
  • Can Be Misused: Sometimes, people misuse hypothesis testing, tweaking data to get a desired result or focusing only on whether the result is statistically significant.
  • No Absolute Proof: Hypothesis testing doesn’t prove that your hypothesis is true. It only helps you decide if there’s enough evidence to reject the null hypothesis, so the conclusions are based on likelihood, not certainty.

Final Thoughts 

Hypothesis testing helps you make decisions based on data. It involves setting up your initial idea, picking a significance level, doing the test, and looking at the results. By following these steps, you can make sure your conclusions are based on solid information, not just guesses.

This approach lets you see if the evidence supports or contradicts your initial idea, helping you make better decisions. But remember that hypothesis testing isn’t perfect. Things like sample size and assumptions can affect the results, so it’s important to be aware of these limitations.

In simple terms, using a step-by-step guide for hypothesis testing is a great way to better understand your data. Follow the steps carefully and keep in mind the method’s limits.

What is the difference between one-tailed and two-tailed tests?

 A one-tailed test assesses the probability of the observed data in one direction (either greater than or less than a certain value). In contrast, a two-tailed test looks at both directions (greater than and less than) to detect any significant deviation from the null hypothesis.

How do you choose the appropriate test for hypothesis testing?

The choice of test depends on the type of data you have and the hypotheses you are testing. Common tests include t-tests, chi-square tests, and ANOVA. For more details about ANOVA, you may read Complete Details on What is ANOVA in Statistics? It’s important to match the test to the data characteristics and the research question.

What is the role of sample size in hypothesis testing?  

Sample size affects the reliability of hypothesis testing. Larger samples provide more reliable estimates and can detect smaller effects, while smaller samples may lead to less accurate results and reduced power.
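One way to see this effect is to compute the power of a simple one-sided z test directly; the effect size and significance level below are illustrative choices, not from the text.

```python
import math
from scipy import stats

# Power of a one-sided z test (H1: mu > mu0) for a true effect of
# d standard deviations, at significance level alpha, sample size n.
def z_test_power(d, n, alpha=0.05):
    z_crit = stats.norm.ppf(1 - alpha)   # upper-tail critical z value
    # Probability the test statistic exceeds z_crit under the true mean.
    return 1 - stats.norm.cdf(z_crit - d * math.sqrt(n))

for n in (10, 30, 100):
    print(n, round(z_test_power(0.3, n), 3))
```

Power climbs with n: the same 0.3-standard-deviation effect that is usually missed at n = 10 is almost always detected at n = 100.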

Can hypothesis testing prove that a hypothesis is true?  

Hypothesis testing cannot prove that a hypothesis is true. It can only provide evidence to support or reject the null hypothesis. A result can indicate whether the data is consistent with the null hypothesis or not, but it does not prove the alternative hypothesis with certainty.


Normal Hypothesis Testing ( AQA A Level Maths: Statistics )


Normal Hypothesis Testing

How is a hypothesis test carried out with the normal distribution?

  • The population mean is tested by looking at the mean of a sample taken from the population
  • A hypothesis test is used when the value of the assumed population mean is questioned
  • Make sure you clearly define µ before writing the hypotheses, if it has not been defined in the question
  • The null hypothesis will always be H 0 : µ = ...
  • The alternative hypothesis will depend on whether it is a one-tailed or two-tailed test
  • For a one-tailed test, the alternative hypothesis, H 1 , will be H 1 :   µ > ... or  H 1 :   µ < ...
  • For a two-tailed test, the alternative hypothesis, H 1 , will be H 1 :   µ ≠ ...
  • Remember that the variance of the sample mean distribution will be the variance of the population distribution divided by n
  • the mean of the sample mean distribution will be the same as the mean of the population distribution
  • The normal distribution will be used to decide whether the observed value of the test statistic is significant, by either:
  • calculating the probability of the test statistic taking the observed or a more extreme value (the p – value ) and comparing this with the significance level, or
  • finding the critical region and checking whether the observed value lies within it
  • Finding the critical region can be more useful for considering more than one observed value or for further testing

How is the critical value found in a hypothesis test for the mean of a normal distribution?

  • The probability of the observed value being within the critical region, given a true null hypothesis, will be the same as the significance level
  • To find the critical value(s) find the distribution of the sample means, assuming H 0 is true, and use the inverse normal function on your calculator
  • For a two-tailed test you will need to find both critical values, one at each end of the distribution
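The inverse normal step can be sketched in Python as well as on a calculator; the numbers below (assumed mean, population standard deviation, sample size) are illustrative and not from the note.

```python
import math
from scipy import stats

# Critical values for the sample mean under H0: mu = 30, with
# population sd 4 and sample size 25, at the 5% level (illustrative).
mu0, sigma, n, alpha = 30.0, 4.0, 25, 0.05
se = sigma / math.sqrt(n)   # sd of the sample mean distribution

# One-tailed test (H1: mu > 30): a single upper critical value.
upper = stats.norm.ppf(1 - alpha, loc=mu0, scale=se)

# Two-tailed test (H1: mu != 30): a critical value in each tail.
lo = stats.norm.ppf(alpha / 2, loc=mu0, scale=se)
hi = stats.norm.ppf(1 - alpha / 2, loc=mu0, scale=se)

print(round(upper, 3), round(lo, 3), round(hi, 3))
```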

What steps should I follow when carrying out a hypothesis test for the mean of a normal distribution?

  • Following these steps will help when carrying out a hypothesis test for the mean of a normal distribution:

Step 1.  Define the population parameter µ in context, if it has not been defined in the question

Step 2.  Write the null and alternative hypotheses clearly using the form

H 0 : μ = ...

H 1 : μ > ... , H 1 : μ < ... or H 1 : μ ≠ ...

Step 3.  State the distribution of the sample mean, assuming H 0 is true

Step 4.    Calculate either the critical value(s) or the p – value (probability of the observed value) for the test

Step 5.    Compare the observed value of the test statistic with the critical value(s) or the p - value with the significance level

Step 6.    Decide whether there is enough evidence to reject H 0 or whether you fail to reject it

Step 7.  Write a conclusion in context



Author: Amber

Amber gained a first class degree in Mathematics & Meteorology from the University of Reading before training to become a teacher. She is passionate about teaching, having spent 8 years teaching GCSE and A Level Mathematics both in the UK and internationally. Amber loves creating bright and informative resources to help students reach their potential.

Hypothesis Testing

Hypothesis testing is a tool for making statistical inferences about the population data. It is an analysis tool that tests assumptions and determines how likely something is within a given standard of accuracy. Hypothesis testing provides a way to verify whether the results of an experiment are valid.

A null hypothesis and an alternative hypothesis are set up before performing the hypothesis testing. This helps to arrive at a conclusion regarding the sample obtained from the population. In this article, we will learn more about hypothesis testing, its types, steps to perform the testing, and associated examples.


What is Hypothesis Testing in Statistics?

Hypothesis testing uses sample data from the population to draw useful conclusions regarding the population probability distribution . It tests an assumption made about the data using different types of hypothesis testing methodologies. The hypothesis testing results in either rejecting or not rejecting the null hypothesis.

Hypothesis Testing Definition

Hypothesis testing can be defined as a statistical tool that is used to identify if the results of an experiment are meaningful or not. It involves setting up a null hypothesis and an alternative hypothesis. These two hypotheses will always be mutually exclusive. This means that if the null hypothesis is true then the alternative hypothesis is false and vice versa. An example of hypothesis testing is setting up a test to check if a new medicine works on a disease in a more efficient manner.

Null Hypothesis

The null hypothesis is a concise mathematical statement that is used to indicate that there is no difference between two possibilities. In other words, there is no difference between certain characteristics of data. This hypothesis assumes that the outcomes of an experiment are based on chance alone. It is denoted as \(H_{0}\). Hypothesis testing is used to conclude if the null hypothesis can be rejected or not. Suppose an experiment is conducted to check if girls are shorter than boys at the age of 5. The null hypothesis will say that they are the same height.

Alternative Hypothesis

The alternative hypothesis is an alternative to the null hypothesis. It is used to show that the observations of an experiment are due to some real effect. It indicates that there is a statistical significance between two possible outcomes and can be denoted as \(H_{1}\) or \(H_{a}\). For the above-mentioned example, the alternative hypothesis would be that girls are shorter than boys at the age of 5.

Hypothesis Testing P Value

In hypothesis testing, the p value is used to indicate whether the results obtained after conducting a test are statistically significant or not. It also indicates the probability of making an error in rejecting or not rejecting the null hypothesis. This value is always a number between 0 and 1. The p value is compared to an alpha level, \(\alpha\), or significance level. The alpha level can be defined as the acceptable risk of incorrectly rejecting the null hypothesis. The alpha level is usually chosen between 1% and 5%.

Hypothesis Testing Critical Region

All sets of values that lead to rejecting the null hypothesis lie in the critical region. Furthermore, the value that separates the critical region from the non-critical region is known as the critical value.

Hypothesis Testing Formula

Depending upon the type of data available and the size, different types of hypothesis testing are used to determine whether the null hypothesis can be rejected or not. The hypothesis testing formula for some important test statistics are given below:

  • z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\). \(\overline{x}\) is the sample mean, \(\mu\) is the population mean, \(\sigma\) is the population standard deviation and n is the size of the sample.
  • t = \(\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}\). s is the sample standard deviation.
  • \(\chi ^{2} = \sum \frac{(O_{i}-E_{i})^{2}}{E_{i}}\). \(O_{i}\) is the observed value and \(E_{i}\) is the expected value.

We will learn more about these test statistics in the upcoming section.

Types of Hypothesis Testing

Selecting the correct test for performing hypothesis testing can be confusing. These tests are used to determine a test statistic on the basis of which the null hypothesis can either be rejected or not rejected. Some of the important tests used for hypothesis testing are given below.

Hypothesis Testing Z Test

A z test is a way of hypothesis testing that is used for a large sample size (n ≥ 30). It is used to determine whether there is a difference between the population mean and the sample mean when the population standard deviation is known. It can also be used to compare the mean of two samples. It is used to compute the z test statistic. The formulas are given as follows:

  • One sample: z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\).
  • Two samples: z = \(\frac{(\overline{x_{1}}-\overline{x_{2}})-(\mu_{1}-\mu_{2})}{\sqrt{\frac{\sigma_{1}^{2}}{n_{1}}+\frac{\sigma_{2}^{2}}{n_{2}}}}\).
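scipy has no built-in z test, but the one-sample formula above is straightforward to compute directly; the numbers here are illustrative.

```python
import math
from scipy import stats

# One-sample z test: H0: mu = 100, known sigma = 15, n = 36,
# sample mean 104 (illustrative numbers).
xbar, mu, sigma, n = 104.0, 100.0, 15.0, 36
z = (xbar - mu) / (sigma / math.sqrt(n))
p = 2 * stats.norm.sf(abs(z))   # two-tailed p-value
print(round(z, 2), round(p, 4))
```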

Hypothesis Testing t Test

The t test is another method of hypothesis testing that is used for a small sample size (n < 30). It is also used to compare the sample mean and population mean. However, the population standard deviation is not known. Instead, the sample standard deviation is known. The mean of two samples can also be compared using the t test.

  • One sample: t = \(\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}\).
  • Two samples: t = \(\frac{(\overline{x_{1}}-\overline{x_{2}})-(\mu_{1}-\mu_{2})}{\sqrt{\frac{s_{1}^{2}}{n_{1}}+\frac{s_{2}^{2}}{n_{2}}}}\).
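For the one-sample case, scipy implements this test as `scipy.stats.ttest_1samp`; the small sample below is made up for illustration.

```python
from scipy import stats

# One-sample t test on a small sample (n < 30); H0: mu = 100.
sample = [98, 102, 95, 101, 99, 104, 97, 100]
t_stat, p_value = stats.ttest_1samp(sample, popmean=100)
print(round(t_stat, 3), round(p_value, 3))
```

The sample mean here (99.5) is close to 100, so the p-value is large and we fail to reject H0.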

Hypothesis Testing Chi Square

The Chi square test is a hypothesis testing method that is used to check whether the variables in a population are independent or not. It is used when the test statistic is chi-squared distributed.
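`scipy.stats.chi2_contingency` performs this independence test on a table of counts; the 2x2 table below is hypothetical.

```python
from scipy import stats

# Chi-square test of independence on a hypothetical 2x2 table of
# counts (e.g., treatment group vs outcome).
observed = [[30, 10],
            [20, 40]]
chi2, p, dof, expected = stats.chi2_contingency(observed)
print(round(chi2, 3), round(p, 5), dof)
```

A small p-value here is evidence against independence of the row and column variables; `expected` holds the counts you would expect under the null hypothesis.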

One Tailed Hypothesis Testing

One tailed hypothesis testing is done when the rejection region is only in one direction. It can also be known as directional hypothesis testing because the effects can be tested in one direction only. This type of testing is further classified into the right tailed test and left tailed test.

Right Tailed Hypothesis Testing

The right tail test is also known as the upper tail test. This test is used to check whether the population parameter is greater than some value. The null and alternative hypotheses for this test are given as follows:

\(H_{0}\): The population parameter is ≤ some value

\(H_{1}\): The population parameter is > some value.

If the test statistic has a greater value than the critical value, then the null hypothesis is rejected.


Left Tailed Hypothesis Testing

The left tail test is also known as the lower tail test. It is used to check whether the population parameter is less than some value. The hypotheses for this hypothesis testing can be written as follows:

\(H_{0}\): The population parameter is ≥ some value

\(H_{1}\): The population parameter is < some value.

The null hypothesis is rejected if the test statistic has a value lesser than the critical value.


Two Tailed Hypothesis Testing

In this hypothesis testing method, the critical region lies on both sides of the sampling distribution. It is also known as a non-directional hypothesis testing method. The two-tailed test is used when you want to determine whether the population parameter is different from some value. The hypotheses can be set up as follows:

\(H_{0}\): the population parameter = some value

\(H_{1}\): the population parameter ≠ some value

The null hypothesis is rejected if the absolute value of the test statistic is greater than the critical value, that is, if the observed value falls in either tail of the distribution.


Hypothesis Testing Steps

Hypothesis testing can be easily performed in five simple steps. The most important step is to correctly set up the hypotheses and identify the right method for hypothesis testing. The basic steps to perform hypothesis testing are as follows:

  • Step 1: Set up the null hypothesis by correctly identifying whether it is the left-tailed, right-tailed, or two-tailed hypothesis testing.
  • Step 2: Set up the alternative hypothesis.
  • Step 3: Choose the correct significance level, \(\alpha\), and find the critical value.
  • Step 4: Calculate the correct test statistic (z, t or \(\chi^{2}\)) and p-value.
  • Step 5: Compare the test statistic with the critical value or compare the p-value with \(\alpha\) to arrive at a conclusion. In other words, decide if the null hypothesis is to be rejected or not.

Hypothesis Testing Example

The best way to solve a problem on hypothesis testing is by applying the 5 steps mentioned in the previous section. Suppose a researcher claims that the average weight of men is greater than 100kgs with a standard deviation of 15kgs. 30 men are chosen with an average weight of 112.5 Kgs. Using hypothesis testing, check if there is enough evidence to support the researcher's claim. The confidence level is given as 95%.

Step 1: This is an example of a right-tailed test. Set up the null hypothesis as \(H_{0}\): \(\mu\) = 100.

Step 2: The alternative hypothesis is given by \(H_{1}\): \(\mu\) > 100.

Step 3: As this is a one-tailed test, \(\alpha\) = 100% - 95% = 5%. This can be used to determine the critical value.

1 - \(\alpha\) = 1 - 0.05 = 0.95

0.95 gives the required area under the curve. Now using a normal distribution table, the area 0.95 is at z = 1.645. A similar process can be followed for a t-test. The only additional requirement is to calculate the degrees of freedom given by n - 1.

Step 4: Calculate the z test statistic. The z test is used because the sample size is 30 (n ≥ 30) and the population standard deviation is known, along with the sample and population means.

z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\).

\(\mu\) = 100, \(\overline{x}\) = 112.5, n = 30, \(\sigma\) = 15

z = \(\frac{112.5-100}{\frac{15}{\sqrt{30}}}\) = 4.56

Step 5: Conclusion. As 4.56 > 1.645 thus, the null hypothesis can be rejected.
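The five steps above can be checked numerically; this sketch reproduces the example's test statistic and critical value.

```python
import math
from scipy import stats

# Worked example: H0: mu = 100, H1: mu > 100, sigma = 15, n = 30,
# sample mean 112.5, alpha = 0.05 (right-tailed z test).
xbar, mu, sigma, n, alpha = 112.5, 100.0, 15.0, 30, 0.05
z = (xbar - mu) / (sigma / math.sqrt(n))
z_crit = stats.norm.ppf(1 - alpha)   # upper-tail critical value
print(round(z, 2), round(z_crit, 3), z > z_crit)   # reject H0 if True
```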

Hypothesis Testing and Confidence Intervals

Confidence levels form an important part of hypothesis testing. This is because the alpha level can be determined from a given confidence level. Suppose the confidence level is 95%. Subtract it from 100%. This gives 100 - 95 = 5% or 0.05. This is the alpha value for a one-tailed hypothesis test. To obtain the alpha value for a two-tailed hypothesis test, divide this value by 2. This gives 0.05 / 2 = 0.025.

Related Articles:

  • Probability and Statistics
  • Data Handling

Important Notes on Hypothesis Testing

  • Hypothesis testing is a technique that is used to verify whether the results of an experiment are statistically significant.
  • It involves the setting up of a null hypothesis and an alternate hypothesis.
  • There are three types of tests that can be conducted under hypothesis testing - z test, t test, and chi square test.
  • Hypothesis testing can be classified as right tail, left tail, and two tail tests.

Examples on Hypothesis Testing

  • Example 1: The average weight of a dumbbell in a gym is 90lbs. However, a physical trainer believes that the average weight might be higher. A random sample of 5 dumbbells has an average weight of 110lbs and a standard deviation of 18lbs. Using hypothesis testing, check if the physical trainer's claim can be supported at a 95% confidence level. Solution: As the sample size is less than 30, the t-test is used. \(H_{0}\): \(\mu\) = 90, \(H_{1}\): \(\mu\) > 90. \(\overline{x}\) = 110, \(\mu\) = 90, n = 5, s = 18, \(\alpha\) = 0.05. Using the t-distribution table, the critical value is 2.132. t = \(\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}\) = 2.484. As 2.484 > 2.132, the null hypothesis is rejected. Answer: The average weight of the dumbbells may be greater than 90lbs
  • Example 2: The average score on a test is 80 with a standard deviation of 10. With a new teaching curriculum introduced, it is believed that this score will change. On randomly testing the scores of 36 students, the mean was found to be 88. With a 0.05 significance level, is there any evidence to support this claim? Solution: This is an example of two-tailed hypothesis testing. The z test will be used. \(H_{0}\): \(\mu\) = 80, \(H_{1}\): \(\mu\) ≠ 80. \(\overline{x}\) = 88, \(\mu\) = 80, n = 36, \(\sigma\) = 10, \(\alpha\) = 0.05 / 2 = 0.025. The critical value using the normal distribution table is 1.96. z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\) = \(\frac{88-80}{\frac{10}{\sqrt{36}}}\) = 4.8. As 4.8 > 1.96, the null hypothesis is rejected. Answer: There is a difference in the scores after the new curriculum was introduced.
  • Example 3: The average score of a class is 90. However, a teacher believes that the average score might be lower. The scores of 6 students were randomly measured. The mean was 82 with a standard deviation of 18. With a 0.05 significance level, use hypothesis testing to check if this claim is true. Solution: The t test will be used. \(H_{0}\): \(\mu\) = 90, \(H_{1}\): \(\mu\) < 90. \(\overline{x}\) = 82, \(\mu\) = 90, n = 6, s = 18. The critical value from the t table is -2.015. t = \(\frac{\overline{x}-\mu}{\frac{s}{\sqrt{n}}}\) = \(\frac{82-90}{\frac{18}{\sqrt{6}}}\) = -1.088. As -1.088 > -2.015, we fail to reject the null hypothesis. Answer: There is not enough evidence to support the claim.
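These calculations are easy to double-check in code; the sketch below re-runs Example 3's left-tailed t test.

```python
import math
from scipy import stats

# Example 3: H0: mu = 90, H1: mu < 90, n = 6, sample mean 82,
# sample sd 18, alpha = 0.05 (left-tailed t test).
xbar, mu, s, n, alpha = 82.0, 90.0, 18.0, 6, 0.05
t = (xbar - mu) / (s / math.sqrt(n))
t_crit = stats.t.ppf(alpha, df=n - 1)   # lower-tail critical value
print(round(t, 3), round(t_crit, 3), t < t_crit)   # reject H0 if True
```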


FAQs on Hypothesis Testing

What is Hypothesis Testing?

Hypothesis testing in statistics is a tool that is used to make inferences about the population data. It is also used to check if the results of an experiment are valid.

What is the z Test in Hypothesis Testing?

The z test in hypothesis testing is used to find the z test statistic for normally distributed data. The z test is used when the standard deviation of the population is known and the sample size is greater than or equal to 30.

What is the t Test in Hypothesis Testing?

The t test in hypothesis testing is used when the data follows a student t distribution . It is used when the sample size is less than 30 and standard deviation of the population is not known.

What is the formula for z test in Hypothesis Testing?

The formula for a one sample z test in hypothesis testing is z = \(\frac{\overline{x}-\mu}{\frac{\sigma}{\sqrt{n}}}\) and for two samples is z = \(\frac{(\overline{x_{1}}-\overline{x_{2}})-(\mu_{1}-\mu_{2})}{\sqrt{\frac{\sigma_{1}^{2}}{n_{1}}+\frac{\sigma_{2}^{2}}{n_{2}}}}\).

What is the p Value in Hypothesis Testing?

The p value helps to determine if the test results are statistically significant or not. In hypothesis testing, the null hypothesis can either be rejected or not rejected based on the comparison between the p value and the alpha level.

What is One Tail Hypothesis Testing?

When the rejection region is only on one side of the distribution curve then it is known as one tail hypothesis testing. The right tail test and the left tail test are two types of directional hypothesis testing.

What is the Alpha Level in Two Tail Hypothesis Testing?

To get the alpha level in a two tail hypothesis testing divide \(\alpha\) by 2. This is done as there are two rejection regions in the curve.


Statistics By Jim

Making statistics intuitive

How to Identify the Distribution of Your Data

By Jim Frost

You’re probably familiar with data that follow the normal distribution. The normal distribution is that nice, familiar bell-shaped curve. Unfortunately, not all data are normally distributed or as intuitive to understand. You can picture the symmetric normal distribution, but what about the Weibull or Gamma distributions? This uncertainty might leave you feeling unsettled. In this post, I show you how to identify the probability distribution of your data.

You might think of nonnormal data as abnormal. However, in some areas, you should actually expect nonnormal distributions. For instance, income data are typically right skewed. If a process has a natural limit, data tend to skew away from the limit. For example, purity can’t be greater than 100%, which might cause the data to cluster near the upper limit and skew left towards lower values. On the other hand, drill holes can’t be smaller than the drill bit. The sizes of the drill holes might be right-skewed away from the minimum possible size.

Data that follow any probability distribution can be valuable. However, many people don’t feel as comfortable with nonnormal data. Let’s shed light on how to identify the distribution of your data!

We’ll learn how to identify the probability distribution using body fat percentage data from middle school girls that I collected during an experiment. You can download the CSV data file: body_fat .

Related posts : Understanding Probability Distributions  and The Normal Distribution

Graph the Raw Data

Let’s plot the raw data to see what it looks like.

Histogram displays a right skewed distribution for the body fat data. We want to identify the distribution of these data.

The histogram gives us a good overview of the data. At a glance, we can see that these data clearly are not normally distributed. They are right skewed. The peak is around 27%, and the distribution extends further into the higher values than to the lower values. Learn more about skewed distributions . Histograms can also identify bimodal distributions .

These data are not normal, but which probability distribution do they follow? Fortunately, statistical software can help us!

Related posts : Using Histograms to Understand Your Data , Dot Plots: Using, Examples, and Interpreting , and Assessing Normality: Histograms vs. Normal Probability Plots

Using Distribution Tests to Identify the Probability Distribution that Your Data Follow

Distribution goodness-of-fit tests are hypothesis tests that determine whether your sample data were drawn from a population that follows a hypothesized probability distribution. Like any statistical hypothesis test , distribution tests have a null hypothesis and an alternative hypothesis.

  • H 0 : The sample data follow the hypothesized distribution.
  • H 1 : The sample data do not follow the hypothesized distribution.

For distribution goodness-of-fit tests, small p-values indicate that you can reject the null hypothesis and conclude that your data were not drawn from a population with the specified distribution. However, we want to identify the probability distribution that our data follow rather than the distributions they don’t follow! Consequently, distribution tests are a rare case where you look for high p-values to identify candidate distributions. Learn more about Goodness of Fit: Definition & Tests .

Before we test our data to identify the distribution, here are some measures you need to know:

Anderson-Darling statistic (AD): There are different distribution tests. The test I’ll use for our data is the Anderson-Darling test. The Anderson-Darling statistic is the test statistic. It’s like the t-value for t-tests or the F-value for F-tests . Typically, you don’t interpret this statistic directly, but the software uses it to calculate the p-value for the test.

P-value: Distribution tests that have high p-values are suitable candidates for your data’s distribution. Unfortunately, it is not possible to calculate p-values for some distributions with three parameters.

LRT P: If you are considering a three-parameter distribution, assess the LRT P to determine whether the third parameter significantly improves the fit compared to the associated two-parameter distribution. An LRT P value that is less than your significance level indicates a significant improvement over the two-parameter distribution. If you see a higher value, consider staying with the two-parameter distribution.

Note that this example covers continuous data. For categorical and discrete variables, you should use the chi-square goodness of fit test .
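The Minitab workflow in this post can be approximated in Python: fit candidate distributions with scipy and compare goodness-of-fit p-values. The sketch below uses a Kolmogorov-Smirnov test on simulated right-skewed data, not the body-fat dataset, and note that KS p-values are only approximate when the parameters are estimated from the same data.

```python
import numpy as np
from scipy import stats

# Simulated right-skewed data standing in for the body-fat sample.
rng = np.random.default_rng(42)
data = rng.lognormal(mean=3.3, sigma=0.24, size=200)

results = {}
for name, dist in [("norm", stats.norm), ("lognorm", stats.lognorm)]:
    params = dist.fit(data)               # maximum-likelihood fit
    ks = stats.kstest(data, name, args=params)
    results[name] = ks.pvalue
    print(name, round(ks.pvalue, 3))
```

As in the post, a high p-value marks a candidate distribution; the fitted lognormal should fit its own simulated data well.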

Goodness of Fit Test Results for the Distribution Tests

I’m using Minitab, which can test 14 probability distributions and two transformations all at once. Let’s take a look at the output below. We’re looking for higher p-values in the Goodness-of-Fit Test table below.

Table of goodness-of-fit results for the distribution tests. The top candidates are highlighted.

As we expected, the Normal distribution does not fit the data. The p-value is less than 0.005, which indicates that we can reject the null hypothesis that these data follow the normal distribution.

The Box-Cox transformation and the Johnson transformation both have high p-values. If we need to transform our data to follow the normal distribution, the high p-values indicate that we can use these transformations successfully. However, we’ll disregard the transformations because we want to identify our probability distribution rather than transform it.

The highest p-value is for the three-parameter Weibull distribution (>0.500). For the three-parameter Weibull, the LRT P is significant (0.000), which means that the third parameter significantly improves the fit.

The lognormal distribution has the next highest p-value of 0.345.

Let’s consider the three-parameter Weibull distribution and lognormal distribution to be our top two candidates.

Related post : Understanding the Weibull Distribution

Using Probability Plots to Identify the Distribution of Your Data

Probability plots might be the best way to determine whether your data follow a particular distribution. If your data follow the straight line on the graph, the distribution fits your data. This process is simple to do visually. Informally, this process is called the “fat pencil” test. If all the data points line up within the area of a fat pencil laid over the center straight line, you can conclude that your data follow the distribution.

Probability plots are also known as quantile-quantile plots, or Q-Q plots. These plots are similar to Empirical CDF plots except that they transform the axes so the fitted distribution follows a straight line.

Q-Q plots are especially useful in cases where the distribution tests are too powerful. Distribution tests are like other hypothesis tests. As the sample size increases, the statistical power of the test also increases. With very large sample sizes, the test can have so much power that trivial departures from the distribution produce statistically significant results. In these cases, your p-value will be less than the significance level even when your data follow the distribution.

The solution is to assess Q-Q plots to identify the distribution of your data. If the data points fall along the straight line, you can conclude the data follow that distribution even if the p-value is statistically significant. Learn more about QQ Plots: Uses, Benefits & Interpreting .
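scipy's `probplot` computes the points behind such a plot; the correlation `r` of the ordered data against the theoretical quantiles gives a quick numerical read on how closely the points follow the straight line. The data here are simulated, not the body-fat sample.

```python
import numpy as np
from scipy import stats

# Normal Q-Q plot ingredients for simulated normal data.
rng = np.random.default_rng(0)
data = rng.normal(loc=10, scale=2, size=100)

(osm, osr), (slope, intercept, r) = stats.probplot(data, dist="norm")
# When the data really are normal, slope ~ sd and intercept ~ mean.
print(round(slope, 2), round(intercept, 2), round(r, 3))
```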

The probability plots below include the normal distribution, our top two candidates, and the gamma distribution.

Probability plot that compares the fit of distributions to help us identify the distribution of our data.

The data points for the normal distribution don’t follow the center line. However, the data points do follow the line very closely for both the lognormal and the three-parameter Weibull distributions. The gamma distribution doesn’t follow the center line quite as well as the other two, and its p-value is lower. Again, it appears like the choice comes down to our top two candidates from before. How do we choose?

An Additional Consideration for Three-Parameter Distributions

Three-parameter distributions have a threshold parameter. The threshold parameter is also known as the location parameter. This parameter shifts the entire distribution left and right along the x-axis. The threshold/location parameter defines the smallest possible value in the distribution. You should use a three-parameter distribution only if the location truly is the lowest possible value. In other words, use subject-area knowledge to help you choose.

The threshold parameter for our data is 16.06038 (shown in the table below). This cutoff point defines the smallest value in the Weibull distribution. However, in the full population of middle school girls, it is unlikely that there is a strict cutoff at this value. Instead, lower values are possible even though they are less likely. Consequently, I’ll pick the lognormal distribution.

Related post : Understanding the Lognormal Distribution

Parameter Values for Our Distribution

We’ve identified our distribution as the lognormal distribution. Now, we need to find the parameter values for it. Population parameters are the values that define the shape and location of the distribution. We just need to look at the distribution parameters table below!

Table of estimated distribution parameters for a variety of distributions.

Our body fat percentage data for middle school girls follow a lognormal distribution with a location of 3.32317 and a scale of 0.24188.
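With those two numbers you can already answer practical questions, because Minitab's lognormal location and scale are the mean and standard deviation of ln(X). A standard-library sketch (the 20–24% interval is just an illustrative range, not a value from the output):

```python
import math
from statistics import NormalDist

# Minitab's lognormal "location" and "scale" parameters are the mean and
# standard deviation of ln(X), so probabilities come from a normal
# distribution applied to the log-transformed limits.
LOCATION = 3.32317   # mean of ln(body fat %)
SCALE = 0.24188      # standard deviation of ln(body fat %)

def lognormal_cdf(x, mu=LOCATION, sigma=SCALE):
    """P(X <= x) for the fitted lognormal distribution."""
    return NormalDist(mu, sigma).cdf(math.log(x))

# Estimated share of middle school girls with body fat between 20% and 24%.
p_20_24 = lognormal_cdf(24) - lognormal_cdf(20)  # roughly 0.19
```

The same function also gives percentiles and tail probabilities, which is exactly the payoff of identifying the distribution rather than stopping at "not normal."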

Below, I created a probability distribution plot of our two top candidates using the parameter estimates. It displays the probability density functions for these distributions. You can see how the three-parameter Weibull distribution stops abruptly at the threshold/location value. However, the lognormal distribution continues to lower values.

Probability distribution plot that compares the three-parameter Weibull to the lognormal distribution to help us identify the distribution of our data.

Identifying the probability distribution that your data follow can be critical for analyses that are very sensitive to the distribution, such as capability analysis. In a future blog post, I’ll show you what else you can do by simply knowing the distribution of your data. This post is all about continuous data and continuous probability distributions. If you have discrete data, read my post about Goodness-of-Fit Tests for Discrete Distributions .

Finally, I’ll close this post with a graph that compares the raw data to the fitted distribution that we identified.

Histogram that compares the raw data to the lognormal distribution that we identified.

Note: I wrote a different version of this post that appeared elsewhere. I’ve completely rewritten and updated it for my blog site.


Reader Interactions


August 22, 2024 at 10:25 am

If I have data and I am told in a question what the 50th, 75th, and 95th percentiles are, how can I then identify its distribution?


April 20, 2024 at 9:02 pm

If my data is very big, how can I identify its distribution, and what are the techniques? Please describe briefly.


April 20, 2024 at 9:44 pm

Hi Kalywan,

It’s not entirely clear what you’re asking exactly. I think you’re asking about how to identify the distribution of your data when you have a very large dataset.

So, as a refresher: the problem with using distribution tests to identify the distribution of a very large dataset is that they’ll have very high statistical power, and they’ll flag trivial departures from a distribution as statistically significant. That makes the distribution hard to identify!

In those cases, I recommend using QQ Plots . Click the link to learn more.


December 14, 2023 at 5:24 am

Hello. How do we identify the distribution when we are dealing with censored data? Please let me know. Thank you.

December 14, 2023 at 5:23 am

Hello. My data is [6,10,3,2,16,1,17,11,4,5] and I want to identify the distribution and estimate its parameters. As these points are completely random and don’t seem to follow any particular distribution, I tried a non-parametric method (KDE) and obtained the KDE plot, but I am not able to understand how to interpret that plot or how to proceed with estimating the parameters from there. Please help me proceed further with this, or let me know if there is any other way or method to deal with this problem. Thank you.


April 19, 2023 at 4:44 am

Hello Jim, how can I test for different distributions in SAS, like you are doing in Minitab? Thank you very much.


September 29, 2022 at 7:39 pm

Hi Jim, thanks for your answer!!!

Yes, my data is purity data and the UCL is 98.5 because this is the acceptance limit. Values lower than 98.5 are not expected, and they must have a justification for why that happened. In the data set I have 60 values that are lower than the UCL; most of them (around 40 samples) are higher than 97, and only 9 are lower than 95, with 84 being the lowest value.

If I remove these justified data points from my analysis, would it bias the analysis?

About the p-value you are right, it was a typo. I am looking for a p-value greater than 0.05.

I checked the probability plots and none of them were anywhere close to following a straight line. I guess this is happening because of the high skewness.

If you could help with this problem, I would appreciate it so much.

September 29, 2022 at 8:22 pm

Hi Gustavo,

Ah, so that’s NOT an Upper Control Limit (UCL) then. It’s actually the LOWER control limit or LCL. That’s what confused me.

Assuming that the data below the LCL is correct—that is, they are out of spec, but the measurement is valid—then you should leave them in your dataset. However, if you have reason to believe that the measurements themselves are not valid due to some error, then you can take them out. But if they’re valid, they represent part of the distribution of outcomes and you should leave them in.

The skewness by itself isn’t the problem because some probability distributions can fit skewed data. It’s probably the specific shape of the skewness that is causing problems. The probability plots transform the axes so even skewed data can follow the straight line on the graph. They don’t have to follow the line perfectly. Do the “fat pencil test” that I describe here (I’m talking about normal distributions there, but it also applies to probability (Q-Q) plots for other distributions).

I’m assuming that you also checked to see if any transformations can make your data normal? If not, look for that. But I’m guessing you did because it’s right there in the output with the other distributions.

Also, did you check to be sure that your data are in statistical control using a control chart? Out of control data can cause problems like this.

If all the above checks out, then it gets tougher.

I’d do some research and see what others in a similar subject area have done. Someone else might have figured it out!

If that doesn’t work, you might need to look into other methods. These methods I’ve heard of but I’m not overly familiar with. These would be things like nonparametric or bootstrapped capability analysis. Those methods should be able to handle data that don’t have an identifiable distribution. Unfortunately, Minitab can’t do those types.

Unfortunately, that’s all I’ve got! Hopefully, one of those suggestions work.

September 28, 2022 at 7:33 pm

Hi, I read your post and it was very helpful, but I am still having some trouble analyzing my data. My data set is process yield in %, and the closer to 100% the better. The data set has around 1100 samples and only 60 of them are smaller than 98.5, which is my UCL, so my data is highly skewed to the left (skewness = -8). I would like to run a capability test, but as I cannot find a suitable distribution for my data set, I think the capability test may give inconsistent results.

When I run a probability distribution test in Minitab, no distribution gives me a p-value greater than 0.005. So what should I do?

When I have a distribution that has a natural limit, as 100% is the max value I can get in a probability test, which approach should I take to treat or analyze the data?

September 29, 2022 at 6:45 pm

If close to 100% is better, why is the UCL at only 98.5%? Are these purity measurements by any chance? Those tend to be left-skewed when 100% is desired.

Also, just to be clear, you state you’re looking for a p-value greater than 0.005, but that should be 0.05.

Here’s one possibility to consider. You have a very large sample size with n=1100. That gives these distribution tests very high statistical power. Consequently, they can detect trivial departures from any given probability distribution. Check the probability plots and see if they tend to follow the straight line. That’s the approach I recommend, particularly for very large samples like yours. I talk about that in this post in the section titled, “Using Probability Plots . . .”. If the dots follow the line fairly well, go with that distribution despite the p-value.

If you still can’t identify a distribution, let me know and I’ll think about other possibilities.

For capability analysis, choosing the correct distribution matters. Using the wrong one will definitely mess up the results! Capability analysis is sensitive to that.


August 29, 2022 at 3:04 pm

Hi Jim, It is interesting to see that Minitab tests 14 possible distributions! But as I understand it, there’s more than one “version” of any given distribution—for example a “normal” distribution is a bell-shaped curve, but may be slightly taller, or slightly wider and fatter than some other normal distribution curve, and still be “normal” within limits at least. My question is, in your body fat data for example, if you sampled body fat at a different school and you still got a lognormal curve but one that was wider and not quite as tall, would the probability of having a value between 20 and 24% (for example) still be the same? Or would it vary based on how squeezed or stretched your lognormal curve is (while still being lognormal)? Does the software compute this based on some ideal lognormal curve or use the actual data?

August 29, 2022 at 3:39 pm

That’s a great question.

The first thing that I’d point out is that there is not one Normal distribution or any other distribution. There are an infinite number of normal distributions. They share some characteristics, such as being symmetrical, having a single peak in the center, and tapers off equally both directions from the mean. However, they can be taller and narrower or shorter and wider. And have the majority of their values fall in entirely different places than other normal distributions. The same concept applies to lognormal and other distributions.

For this reason, I try to say things like, the data follow A normal distribution. Or A lognormal distribution. Rather than saying the data follow the normal or lognormal distribution because there isn’t one of each. Instead, the body fat percentage data follow a lognormal distribution with specific parameters.

To answer your question, yes, if I had taken a sample from another school, I would’ve likely gotten a slightly different distribution. It could’ve been squeezed or stretched a bit as you describe. That other lognormal distribution would’ve produced a somewhat different probability for values falling within that interval. Like many things we do in statistics, we’re using samples to estimate populations. A key notion in inferential statistics is that those estimates will vary from sample to sample. The quality of our estimates depends on how good our sample is. How large is it? Does it represent the population? Etc.

The software estimates these parameters using maximum likelihood estimation (MLE). Likelihood estimation is a process that calculates how likely a population with particular parameters is to produce a random sample with your sample’s properties. Maximizing that likelihood function simply means choosing the population parameters that are MOST likely to have produced your sample’s characteristics. It performs that process for all distributions. Then you need to determine which distribution best fits your data.

So, with this example, we end up with a lognormal distribution providing the best fit with specific parameters that were most likely to produce our sample.
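For the lognormal specifically, that maximization has a closed form that nicely illustrates the idea: the maximum likelihood estimates are simply the mean and (population) standard deviation of the log-transformed data. A standard-library sketch, not Minitab’s actual routine, and using simulated data rather than the body fat sample:

```python
import math
import random
import statistics

def fit_lognormal_mle(data):
    """MLE for a lognormal: mean and population standard deviation of ln(x)."""
    logs = [math.log(x) for x in data]
    mu = statistics.fmean(logs)
    sigma = math.sqrt(statistics.fmean([(v - mu) ** 2 for v in logs]))
    return mu, sigma

# Simulate a sample from a known lognormal population (parameters roughly
# matching the body fat example) and recover the parameters from the sample.
random.seed(42)
sample = [random.lognormvariate(3.32, 0.24) for _ in range(5000)]
mu_hat, sigma_hat = fit_lognormal_mle(sample)
```

Because a larger sample pins down the likelihood more tightly, the recovered parameters land very close to the true ones here, which is the sample-to-population estimation idea in miniature.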


August 22, 2022 at 7:03 am

I’ve read through you forum and I purchased two of your books – and everything is fantastic, thank you for all the great information!

However, I do have a question (which popped up during my scientific research) that I couldn’t seem to find an answer to: how do I interpret the results of OLS when the dependent variable has been transformed using a Box-Cox transformation? (It was necessary, since the residuals were extremely non-normal, but this fixed the issue)

More specifically, I’m looking to answer the following questions: 1) Do my independent variables have a significant effect on the dependent variable?

2) What’s the direction of the effect of my significant independent variables (positive/negative)?

3) What’s the order of my independent variables by strength of effect? (e.g.: Which independent variable has the strongest effect, and which one has the weakest effect?)

Please note, that I’m not trying to build a predictive model – I just want to know what the important independent variables are in my model, their direction of effect, and their ordinal strength (strongest – 2nd strongest – … – 2nd weakest – weakest). Also, when looking at their “ordinal strength” (my own words haha), I’m assuming correctly that I should be looking at their standardized coefficients, right? Or, for this purpose, is the normality of my residuals important at all? The significant independent variables do change after the Box-Cox transformation of the dependent variable, I just don’t know which model (transformed or untransformed DV) answers my research questions better… Sorry for the long post, keep up the good work! Thanks!

August 22, 2022 at 5:12 pm

Thanks for writing and thanks so much for buying two of my books. If you happen to have bought my regression book, go to Chapter 9 and look for a section titled, “Using Data Transformations to Fix Problems.” There’s a sub-section in it about “How to Interpret the Results for Transformed Data.” I think the entire section will be helpful for you but particularly the interpretation one.

In the transformations section, I note how transformations are my solution of last resort. They can fix problems but, as you’re finding, they complicate interpretation. So, you have non-normal residuals. Hopefully you’ve tried other solutions to fix that, such as specifying a better model. For example, one that properly models curvature. However, sometimes that’s just not possible. In that case, a transformation might be the best choice possible. Another option would be trying a generalized linear model that doesn’t necessarily require your residuals to follow a normal distribution but allows them to follow other distributions.

But back to transformations. If you’re stuck using a transformation, the results apply to the transformed data, and you need to describe them that way. For example, you might say there is a significant, positive relationship between the predictor and the Box-Cox transformed response variable. And in that case, the coefficients explain changes in the transformed response. It’s just not as intuitive understanding what the results mean. Some software can automatically back transform the numbers to help you understand some of it, but you’re not really seeing the true relationships.

If you were developing a predictive model, many of these concerns would be lessened because you wouldn’t need to understand the explanatory roles of each variable. However, you would still need to back transform the predicted values. The predicted values you get “out of the box” will be in transformed units. Additionally, the margin of error (prediction intervals) might be drastically different depending on your predictor values. The transformation will make the transformed prediction intervals nice and constant, but that won’t necessarily be true for the back transformed PIs. So, you’d need to back transform those too. Again, some software does that automatically. I know Minitab statistical software does that.

Understanding the predictive power of each predictor is complicated by the transformation because the standardized coefficients apply to the transformed data. In non-transformed models, you’re correct that standardized coefficients are a good measure to consider. Another good measure is the change in R-squared when the variable is added to the model last. However, the R-squared for your model applies to the transformed DV.

I guess for your overall question about how essential the transformation is, like many things in statistics, it depends on all the details. If your residuals are severely non-normal, then it’s important. However, if they’re only mildly non-normal, not so much. What I’d do is graph your residuals using a normal probability plot (aka a Q-Q plot) and use the “fat pencil test” I describe in the linked post. BTW, that post is about Q-Q plots and data distributions, but it applies to your residuals as well.

I hope that helps clarify some of the issues! Transformations can help matters but they do cause complications for interpretation.


July 25, 2022 at 1:09 pm

6) As a proxy for exposure to benzene (a known human carcinogen) you collect 30 samples (one sample from each of 30 individuals who work at an oil refinery) looking for phenol in the urine. The measure is usually reported as mg/g of creatinine. The mean concentration of all the samples was 252.5 mg/g of creatinine. This is worrying to you because you know that values above 250 indicate an overexposure to benzene. You look at the descriptive statistics and find that the standard deviation in the sample is 75, the range is 500 (2-502), and the interquartile range is 50 (57-107).

a. Looking at the standard deviation, range, and IQR, what do you suspect about the distribution of the data?

b. What is the standard error of the mean for this sample?

c. What is the 95% confidence interval of the mean?

d. In your own words, what can you say about the sample you have collected with respect to the mean you have calculated, the 95% CI, and the levels at which we become concerned about overexposure (250 mg/g creatinine)?

July 25, 2022 at 7:10 pm

I’m not going to answer your homework question for you, but I’ll provide some suggestions and posts I’ve written that will help you answer them yourself. That’s the path to true learning!

One key thing you need to do is determine the general shape of your distribution. At the most basic level, that means determining whether it is symmetrical (e.g., normally distributed) or skewed. That’s easy if you have the data and can graph it. However, if you just have the summary statistics, you can still draw some conclusions. For tips on determining the shape of the distribution, read my post about Skewed Distributions . To help answer that, you’ll need to know what the median is and compare it to the mean. If the median is not provided, you know it falls somewhere within the IQR. I’ll give you the hint that you have reason to believe it is skewed and not normal. Or the dataset might contain one or more extreme outliers.

Read Standard error of the mean to see how to calculate and interpret it.

Learn how to use the SEM to calculate and interpret the 95% Confidence Interval .

By understanding the IQR and quartiles , you can determine what percentage of the sample is below 107 (the upper IQR value).

I hope that helps!


September 18, 2021 at 3:12 am

All (30) data points are 6.5 & 6.6, and the p-value is <0.005. Individual distribution identification shows p-values of <0.005 & <0.010. How do I choose a (non-normal) distribution for calculating process capabilities? Below are the values for reference.

Goodness of Fit Test (Distribution: AD, P, LRT P):
Normal: AD 12.101, P <0.005
Box-Cox Transformation: AD 12.101, P <0.005
Lognormal: AD 12.101, P <0.005
Exponential: AD 22.715, P <0.003
2-Parameter Exponential: AD 14.362, P <0.010, LRT P 0.000
Weibull: AD 14.524, P <0.010
Smallest Extreme Value: AD 14.524, P <0.010
Largest Extreme Value: AD 11.028, P <0.010
Gamma: AD 12.246, P <0.005
Logistic: AD 11.973, P <0.005
Loglogistic: AD 11.973, P <0.005

ML Estimates of Distribution Parameters (Location, Shape, Scale, Threshold):
Normal*: Location 6.57600, Scale 0.04314
Box-Cox Transformation*: Location 12302.42739, Scale 397.08665
Lognormal*: Location 1.88341, Scale 0.00659
Exponential: Scale 6.57600
2-Parameter Exponential: Scale 0.07755, Threshold 6.49845
Weibull: Shape 278.12723, Scale 6.59360
Smallest Extreme Value: Location 6.59364, Scale 0.02355
Largest Extreme Value: Location 6.55247, Scale 0.04785
Gamma: Shape 23582.81096, Scale 0.00028
Logistic: Location 6.58577, Scale 0.02289
Loglogistic: Location 1.88490, Scale 0.00350
* Scale: Adjusted ML estimate

Your response is much appreciated.

September 19, 2021 at 12:55 am

Hi Nishanth,

That’s a tough dataset you have! The p-values are all significant, which indicates that none of the distributions fit. However, I notice you don’t some of the distributions with more parameters (e.g., three parameter Weibull, two parameter exponential, etc.) You should check those. Also the Johnson transformation is not included.

If you can’t find any distribution that the data fit, or get a successful transformation, you might need a nonparametric approach. Or a bootstrapping approach. Unfortunately, your data just don’t follow any of the listed distributions!


August 25, 2021 at 2:50 pm

There is a mention that “The p-value is less than 0.005, which indicates that we can reject the null hypothesis that these data follow the normal distribution.”

Can the above statement be rephrased to say that if the p-value is greater than 0.005, we can be sure the actual data follow the null hypothesis?

I would like to know the difference between the statements “we can accept the null hypothesis” and “we failed to reject the null hypothesis”.

August 26, 2021 at 2:51 am

Thanks for writing with your great question!

First, I should clarify that the correct cutoff value is 0.05. When the p-value is less than or equal to 0.05 for a normality test, we can reject the null hypothesis and conclude that the data do not follow a normal distribution.

Distribution tests are unusual for hypothesis tests. For almost all other tests, we want p-values to be low and significant, and draw conclusions when they are. However, for distribution tests, it’s a good sign when p-values are high. We fail to reject the null.

However, we never say that we accept the null. Why not? Well, it has to do with being unable to prove a negative. All we can say is that we have not seen evidence that the data do not follow the normal distribution. However, we can never prove that negative. Perhaps our sample size is too small to detect the difference, or the data are too noisy? I wrote a post about this very issue that you should read: Failing to Reject the Null Hypothesis . That should help you understand why that is the correct wording!


August 11, 2021 at 11:16 am

How do we identify whether data follow a binomial, Poisson, or other distribution, rather than just normal or not normal?

August 11, 2021 at 5:47 pm

Hi Gemechu,

That’s a great question. I’ve written a post that covers exactly that and I discuss both the binomial and Poisson distributions, along with others. Please read my post, Goodness-of-Fit Tests for Discrete Distributions .


May 29, 2021 at 8:31 am

Hi Jim, if a dataset has a skewness of -0.3, can we still consider it to be approximately normally distributed? Is the Jarque-Bera test a good way to verify whether the distribution of a dataset is ‘normal’? Thank you.


May 8, 2021 at 2:33 am

If my data is not following any distribution, can I say it is approximately following a Weibull distribution using the probability plot? If yes, can you share any reference document?

May 9, 2021 at 9:15 pm

Hi Mounika,

If your data are not following any distribution, I’m not sure why you’d be able to say it’s following a Weibull distribution. Are you saying that the p-value is significant but the dots on the probability plot follow the straight line? It’s hard to tell from what you wrote. If that’s the case, you can conclude that the data follow the distribution. That usually happens when you have a large dataset.


April 16, 2021 at 9:10 pm

If the continuous data fit a distribution type other than the normal distribution, say Weibull, can we still run ANOVA the same way we would with normally distributed data?

April 16, 2021 at 11:47 pm

Hi Shamshul,

Generally speaking, when we are talking about parametric tests, they assume that the data follow the normal distribution specifically. There are exceptions, but ANOVA does assume normality. However, when your data exceed a certain sample size, these analyses are valid with nonnormal data. For more information about this and a table with the sample sizes, please see my post about parametric vs. nonparametric analyses . I include ANOVA in that table.


January 28, 2021 at 10:57 am

Hope you are doing great. In your hypothesis test ebook, you clearly expressed that there is no need to worry about the normality assumption provided the data set is large. Now I see you emphasizing the need to determine the distribution of the data. Under what circumstances do I need to determine the distribution of my data so that I can make transformations before proceeding to hypothesis testing?

In other words, when do I have to worry about the normality of my data?

It’s because after reading your ebook, I clearly noticed that normality is not a big issue I should pay attention to when my sample data is huge.

January 28, 2021 at 11:32 pm

Hi Collinz,

You’re quite right that many hypothesis tests don’t require the data to follow a normal distribution when you have a large enough sample. And, an important note, the sample size doesn’t have to be huge. Often you don’t need more than 15-20 observations per group to be able to waive the normality assumption. At any rate, on to answering your question!

There are other situations where knowing the distribution is crucial. Often these are situations where you want to determine probabilities of outcomes falling within particular ranges. For example, capability analysis determines a process’s capability of producing parts that fall within the spec limits. Or, perhaps you want to calculate percentiles for your data using the probability distribution function. In these cases, you need to know which distribution best fits your data. In fact, it’ll often be obvious that the data don’t follow the normal distribution (as with the data in this example) and then the next step becomes determining which distribution your data follow.

Thanks for the great question! And, I hope that helps clarify it.


December 15, 2020 at 1:04 pm

Hi Jim, I have a question. Why do we need other continuous distributions if everything just converges to normal? Why do we need to define other distributions?

December 15, 2020 at 2:46 pm

Hi Eli/Asya,

Continuous distributions don’t necessarily converge to normality. As I describe in this post, some continuous distributions are naturally nonnormal. Gathering larger and larger samples for these inherently nnnnormal distributions won’t produce a normal distribution.

I think you’re referring to the central limit theorem. This theorem states that sampling distributions of the mean will approximate the normal distribution even when the population distribution is not normal. The fact that this occurs is very helpful in allowing you use to use some hypothesis tests even when distribution of values is not normal. For more information, read my post about the central limit theorem .

However, sometimes you need to understand the properties of the distribution of values and not the sampling distribution, which are very different things. Consequently, there are occasions when you need to identify the distribution of your data!


September 23, 2020 at 10:01 am

Thanks Jim for the wonderful article. I am new to the DS field and am trying to find ways to proceed on a project that I am working on. I have a dataset, say X, which is the number of hits our website receives, captured every hour (shall we call it the independent variable?). I also have Y1 and Y2, which are the dependent variables. Here Y1 is the CPU utilization and Y2 is the memory utilization of our servers. My objective is to calculate the expected CPU and memory utilization of, say, next month in relation to the volume X we receive. When I plot X (I am unable to paste the picture here) it shows a proper daily and weekly seasonality. In a day the graph rises to a max peak around 11 am and drops down, and again reaches another peak around 2 pm. So it’s kind of two bell curves in a day. This pattern repeats day after day… Also the curves are similar on weekdays and weekends. Now I used fbprophet to forecast X using past values of X.

The Y1 CPU values make similar patterns; I am able to plot Y1 and forecast it using fbprophet as well. However, I am in a situation where I need to find the exact correlation between X and Y1 and Y2, as well as the correlation between Y1 and Y2 themselves and combinations of these three. I tried the add_regressor() method of fbprophet to influence the forecasts of Y1 and Y2. The resulting forecast values are much closer to the actuals (training data), however I am not convinced with this approach. I need to mathematically derive the correlation between X and Y1, X and Y2, Y1 and Y2, and X and Y1 and Y2. I checked the Pearson correlation and the number is positive 0.025 between X and Y1. I tried ANOVA with Excel and it shows negative -1.025 (it says CPU is inversely correlated to volume), which is unbelievable because I expect only a positive correlation between X and Y1. I did Granger causality and it says X precedes Y1, which means my hypothesis that “volume contributes to CPU” is true. I am wondering how I can use a kind of moment generating function to exactly forecast or calculate values of Y1 and Y2 WITHOUT using forecasting models like ARIMA etc. I need to be able to calculate, with the least error margin, the values of Y1 and Y2 given the value of X. Please advise me on the best approach to take. Thanks in advance, DB [email protected]


September 14, 2020 at 4:49 pm

I have been stuck on a very important project for a long time, knowing in the back of my mind that if I could just find out what type of distribution my data set came from, I could make leaps and bounds worth of progress. I’m so glad I finally googled this problem directly and came upon this article.

I can’t stress enough how valuable this blogpost is to what I’m working on. Thank you so much Jim.


September 3, 2020 at 4:04 am

Thank you for your input. I am wondering what it means if I have a distribution where the mean and standard deviation are really close together. Is this an indication of something? The data is exponentially distributed.


August 25, 2020 at 5:30 pm

Thank you very much for your post! It helped me and a lot of other people out a lot!


August 21, 2020 at 8:17 am

It definitely helped! I appreciate your detailed answer. Through it and the links provided I even managed to work out a couple of follow-up questions I was ruminating on!

Since I started meddling with statistics I’ve been under the impression that the hard part is developing the mindset to appropriately understand the results… without it one tends to just “believe” the numbers. I thank you kindly for helping me understand.

Keep up the good work!

August 21, 2020 at 11:30 am

I’m so glad to hear that! It’s easy to just go with the numbers. You learning how it all works is fantastic. I always think subject-area knowledge plus enough statistical knowledge to understand what the numbers are telling you, plus their limitations, is a crucial combination! Always glad to help!

August 20, 2020 at 11:54 am

I appreciate your detailed response, and the links provided allowed me to work out a couple of follow-up questions!

Since I started meddling with statistics and (theoretically) learned to use the tools I required, I felt it takes time and practice to develop the mindset needed to properly understand statistical results… and by default one tends to “believe” the numbers instead of understanding them! Thank you kindly for the attention.

August 19, 2020 at 4:09 pm

Thank you very much for your blog. Since I found it, I know where to search if I’m in dire need of statistical enlightenment!

I just noticed this article and it left me wondering… If the best-fit distribution is chosen based on the one that has the highest p-value, doesn’t that mean we’re accepting the null hypothesis? This aspect of the goodness-of-fit tests has always puzzled me.

I’ve skimmed through the comments and you address this somewhat, indicating that, technically, with high p-values “your sample provides insufficient evidence to conclude that the population follows a distribution other than the normal distribution”. If we accept the distribution with the highest p-value as the best-fit distribution, but formally speaking we shouldn’t accept the null hypothesis, how strong, then, is the evidence given by goodness-of-fit tests?

Thanks again, and sorry for the long question

August 19, 2020 at 11:26 pm

That is a great question. As I mention, this is an unusual case where we look for higher p-values. However, it’s important to note that a high p-value is just one factor. You still need to incorporate your subject area knowledge. Notice that in this post, I don’t go with the distribution that has the highest p-value (3-parameter Weibull p > 0.500). Instead I go with the lognormal distribution, which has a high (0.345) p-value but not the highest. As I discuss near the end, I use subject area knowledge to choose between the two. So, it’s not just the p-value.

Also, bear in mind that you’re looking at a range of distribution tests. Presumably some of those test will reject the null hypothesis and help rule out some distributions. Notice in this example (which uses real data) that low p-values rule out many distributions, which helps greatly in narrowing down the possibilities. Consequently, we’re not only picking by high p-values, we’re also using low p-values to rule out possibilities.

Also consider that statistical power is important. For hypothesis tests in general, when you have a small sample size, your statistical power is lower, which means it is easier to obtain high p-values. Normally, that is good because it protects you from jumping to conclusions based on larger sampling error that tends to happen in smaller samples. I write about the protective function of high p-values . However, in this scenario with distribution tests, where you want high p-values, an underpowered study can lead you in the wrong direction. Do keep an eye out for small sample sizes. I point out in this post that small samples size can cause these distribution tests to fail to identify departures from a specific distribution. Using the probability plots can help you identify some cases where a small sample deviates from the distribution being tested but the p-value is not significant. I discuss that in this post, but really focus on it in my post about using normal probability plots to assess normality . While that can help in some cases, you should always strive to have a larger sample size. I start to worry when it’s smaller than 20.

July 24, 2020 at 3:02 am

Thanks a million for this wonderful article. Honestly, I am also one of those not very comfortable with distributions other than normal ones. I was working on some data for which the distributions were very different from normal, and we wanted to perform linear regression. So, to even apply transformations to get to a normal shape, the first step was to identify the original distribution. Your article helped me learn something new and very important.

Thanks again for sharing !

Regards, Poonam

July 28, 2020 at 12:33 am

I’m happy to hear that it was helpful! Thanks for writing! 🙂

July 22, 2020 at 7:34 am

Hello Jim, thank you so much for this brilliant article. I am looking forward to the use cases after knowing the underlying distribution; is the article up? Thank you 🙂

July 22, 2020 at 10:18 pm

Thanks for the reminder! It’s not up yet but I do need to get around to writing it!

July 21, 2020 at 6:33 pm

This is a wonderful article for a student like myself, who is just beginning a statistics-oriented career. I want to know how to generate those 95% CI interval plots (%fat vs. Percent). Further, I'm assuming that whenever any such activity needs to be done, we would have to start off with the frequency distribution and then transition to the probability distribution, correct? And is this probability distribution the same as the pdf? Please help me clear up my doubts.

June 28, 2020 at 3:22 am

Hello Jim, thank you for making statistics so easy to understand. I am pleased to inform you that I have managed to buy all three of your books, and I hope they will be of much help to me… and if you could, please also write about how to report research work for publication.

June 4, 2020 at 2:48 pm

Okay, thank you so much. I really got the concept from your explanation; it is very clear!

June 4, 2020 at 2:55 pm

Thanks, Hana! I’m so glad to hear that!

June 4, 2020 at 9:54 am

Thanks Jim for the interesting and useful article. Do you recommend any alternative to Minitab, maybe an R package or other free software?

June 4, 2020 at 1:35 pm

I don’t have a good sense for what other software, particularly free, would be best. I’m sure it’s doable in R though.

June 3, 2020 at 5:28 am

Hi Jim, thanks for this text. I want to ask you: how do I run a goodness-of-fit test in R when the distribution is not one of the defaults?

May 22, 2020 at 10:12 am

Thank you for this awesome article; it is very much helpful. Quick question here: what should I look for when comparing the distribution of one sample against the distribution of another sample?

The end goal is to ensure that they are similar, so I imagine I want to make sure that their means are the same (an ANOVA test) and that their variances are the same (F-Test).

May 22, 2020 at 12:53 pm

There’s a distinction between identifying the distribution of your data (Normal vs. Weibull, Lognormal, etc.) and estimating the properties of your distribution. Although, identifying the distribution does involve estimating the properties for each type of distribution.

The method you write would help you determine whether those two properties (mean and variances) are different. Just be mindful of the statistical power of these tests. If you have particularly small sample sizes, the tests won’t be sensitive enough to unequal means or variances. Failing to reject the null doesn’t necessarily prove they’re equal.

Additionally, testing the means using ANOVA assumes that the variances are equal unless you use Welch’s ANOVA. I write more about this in an article about Welch’s ANOVA versus the typical F-test ANOVA .

April 18, 2020 at 3:53 pm

Good question. Once you know the distribution of your data you can actually have a better prediction of uncertainties such as likelihood of occurrence of events and corresponding impacts. You can also make some meaningful decisions by setting categories.

April 13, 2020 at 1:07 pm

Thank you for the brilliant explanation. Please, what else can I do by simply knowing the distribution of my data?

April 13, 2020 at 9:12 am

Hi Jim, very good explanation. Thank you so much for your effort. I have downloaded the Minitab software, but unfortunately I couldn't find the goodness-of-fit tab. Where can I find it? Kindly reply.

March 17, 2020 at 7:48 am

Please, what else can I do by simply knowing the distribution of my data?

November 14, 2019 at 5:24 am

Hello Jim, your article is very clear and easy to understand for a newbie in stats. I'm looking forward to the article that shows what I can do by simply knowing the distribution of my data. Have you already published it? If yes, can you send me the link?

Thanks again,

October 5, 2019 at 3:48 pm

How can I understand the p-value in distribution identification with a goodness-of-fit test?

For example, when the p-value is 0.45 for the normal distribution, does it mean the data have a 45% probability of fitting the normal distribution? Is that right?

Thank you very much!

October 5, 2019 at 3:58 pm

When the p-value is less than your significance level, you reject the null hypothesis. That’s the general rule. In this case, the null hypothesis states that the data follow a specific distribution, such as the normal distribution. Consequently, if the p-value is greater than the significance level, you fail to reject the null hypothesis. Your data favor the notion that they follow the distribution you are assessing. In your case, the p-value of 0.45 indicates you can reasonably assume that your data follow the normal distribution.

As for the precise meaning of the p-value, it indicates the probability of obtaining your observed sample, or one more extreme, if the null hypothesis is true. Your sample doesn’t perfectly follow the normal distribution. No sample follows it perfectly. There’s always some deviation. The deviation between the distribution of your sample and the normal distribution, and more extreme deviations, have a 45% chance of occurring if the null hypothesis is true (i.e., that the population distribution is normally distributed). In other words, your sample is not unusual if the population is normally distributed. Hence, our conclusion that your sample follows a normal distribution. Technically, we’d say that your sample provides insufficient evidence to conclude that the population follows a distribution other than the normal distribution.

P-values are commonly misinterpreted in the manner that you state. For more information, read my post about interpreting p-values correctly .
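To make the decision rule above concrete, here is a minimal Python sketch. The data are simulated and scipy's Shapiro-Wilk test stands in for the distribution tests discussed in the post (the post itself uses Minitab's Anderson-Darling-based tests), so treat this as an illustration of the logic, not a reproduction of those results:

```python
import numpy as np
from scipy import stats

# Simulated sample of 92 points drawn from a normal population
rng = np.random.default_rng(42)
sample = rng.normal(loc=30, scale=8, size=92)

# H0: the population is normally distributed
stat, p = stats.shapiro(sample)
alpha = 0.05

if p <= alpha:
    print(f"p = {p:.3f}: reject H0; the data deviate from normality")
else:
    print(f"p = {p:.3f}: fail to reject H0; normality is a reasonable assumption")
```

Note that "fail to reject" is not the same as "accept": a high p-value only means the sample provides insufficient evidence of a departure from the distribution.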

September 8, 2019 at 11:07 am

Thanks Jim! Unfortunately I can’t find the Minitab in the CRAN repository. Is there any other way to download the package. Is it available for the for R version 3.5.1?

September 8, 2019 at 7:38 pm

Minitab is an entirely separate statistical software package–like SPSS (but different). It’s not an R function. Sorry about the confusion!

September 6, 2019 at 11:36 am

Hi Jim, Thanks for this explanation! Is minitab a function or a package? I’m wondering how you performed the Goodness of fit for multiple distributions.

Many thanks, Hanna

September 6, 2019 at 2:23 pm

Minitab is a statistical software package. Performing the various goodness-of-fit tests all at once is definitely a convenience. However, you can certainly try them one at a time. I’m not sure how other packages handle that.

August 18, 2019 at 4:24 am

Hi Jim, hope you are in the best of health. I had a query with regard to the application part while modelling the severity of an event, say claim sizes in an insurance company: which distribution would be an ideal choice, gamma or lognormal? As far as I could make sense of it, lognormal is preferable for modelling when dealing with units whose unit size is very small, e.g., alpha particles emitted per minute. Am I on the right lines? Thanks a ton.

August 19, 2019 at 2:20 pm

Hi Lakshay,

I don’t know enough about claim sizes to be able to say–that’s not my field of expertise. You’ll probably need to do some research and try fitting some distributions to your data to see which one fits the best. I show you how to do that in this blog post.

Many distributions can model very small units. It’s more the overall shape of the distribution that is the limiting factor. Lognormal distributions are particularly good at modeling skewed distributions. I show an example of a lognormal distribution in this post. However, other distributions can model skewed distributions, such as the Weibull distribution. So, it depends on the precise shape of the skewness.

In general, the Weibull distribution is a very flexible distribution that can fit a wide variety of shapes. That would be a good distribution to start with if I had to name just one (besides the normal distribution). However, you should assess other distributions. Even though the Weibull distribution is very flexible, it did not provide the best fit for my real world data that I show in this post.

I hope this helps!

July 18, 2019 at 5:45 pm

Is there any difference between a distribution (hypothesis) test and a goodness-of-fit test? Or are they the same thing?

July 18, 2019 at 9:46 pm

They’re definitely related. However, goodness-of-fit is a broader term. It includes distribution tests but it also includes measures such as R-squared, which assesses how well a regression model fits the data.

A distribution test is a more specific term that applies to tests that determine how well a probability distribution fits sample data.

Distribution tests are a subset of goodness-of-fit tests.

June 28, 2019 at 9:26 pm

Excellent article, and I found it very helpful. I opened the CSV data file of body fat % in Excel and found there were 92 separate data points. Could you please let me know if these data are discrete or continuous, if you don't mind me asking? Thank you.

July 1, 2019 at 12:20 am

The data are recorded as percentages. Therefore, they are continuous data.

May 23, 2019 at 8:29 am

Hi Jim, how are you? I really wish to thank you for your indefatigable efforts toward sharing your publications with the world. They help me very much to prepare fully for university education. Furthermore, I would like to have a blog copy of your work. Thank you.

May 24, 2019 at 12:23 am

Thanks so much for writing. I really appreciate your kind words!

The good news is that I’ll be writing a series of ebooks that goes far beyond what I can cover in my blog posts. I’ve completed the first one, which is an ebook about regression analysis . I’ll be working on others. The next one up is an introduction to statistics.

March 5, 2019 at 12:45 am

Thank you so much for your detailed email. I really appreciate it.

March 5, 2019 at 12:11 am

Thanks for your detailed reply. I am using cross-sectional continuous data on inputs (11 variables) used in a crop production system. Variability exists within the dataset due to different levels of input consumption across farming systems, and in some cases some inputs are even zero. Should I go for any grouping of the data? If yes, what kind of grouping approach should I use? I am basically interested in the uncertainty analysis of inputs (fuel and chemical consumption) and sensitivity analysis of the desired output and associated environmental impacts. It would be great if you could guide me. Thanks.

March 5, 2019 at 12:23 am

Given the very specific details of your data and goals for your study, I think you’ll need to discuss this with someone who can sit down with you and go over all of it and give it the time that your study deserves. There’s just not enough information for me to go on and I don’t have the time, unfortunately, to really look into it.

One thing I can say is that if you’re trying to link your inputs to changes in the output, consider using regression analysis. For regression analysis, you only need to worry about the distribution of your residuals rather than your inputs and outputs. Regression is all about linking changes in the inputs to changes in the output. Read my post about when to use regression analysis for more information. It sounds like that might be the goal of your analysis, but I’m not sure.

Best of luck with your study!

March 4, 2019 at 4:28 pm

Thanks for the reply. No I am trying to determine the distribution of my survival curve from a published analysis. I was able to identify the survival probabilities from the published graph. The Minitab program only allows for the importation of one column. The distribution looks like a Weibull distribution but the Minitab results showed a normal distribution had the highest P value which didn’t make sense.

March 4, 2019 at 4:42 pm

Ok, in your original comment you didn’t mention that you were using a published graph. I don’t fully understand what you’re trying to do, and it’s impossible for a concrete reply without the full details. However, below are some things to consider.

Analysts often need to use their process knowledge to help them determine which distribution is appropriate. Perhaps that’s a consideration here? I also don’t know how different the p-values are. Small differences are not meaningful. Additionally, in some cases, Weibull distributions can approximate a normal distribution. Consequently, there might be only a small difference between those distributions.

But, it’s really hard to say. There’s just not enough information.

March 4, 2019 at 2:43 pm

Thanks for your reply. Yes, on the basis of p-values only, I am concluding that the data do not follow any distribution. My sample size is 1366 with 11 variables. None follows a normal distribution. I tried a Box-Cox transformation and checked normality again using the p-value. After transformation, the data points of some variables largely follow the line, but some data points deviate from the line either at the beginning or at the end.

For some variables, however, the data points largely follow the line even without transformation, with some points deviating at the ends. Thanks.

March 4, 2019 at 3:26 pm

You have a particularly large sample size. Consequently, you might need to focus more on the probability plots rather than the p-values. I suggest you read the section in this post that is titled “Using Probability Plots to Identify the Distribution of Your Data.” It describes how the additional power that distribution tests have with large sample sizes can detect meaningless deviations from a distribution. In those cases, using probability plots might be a better approach.

After you look at the probability plots, if an untransformed distribution fits well, I’d use that, otherwise go with the transformed.

You didn’t mention what you’re using these data for but be aware that some hypothesis tests are robust to departures from the normal distribution.

March 4, 2019 at 1:12 pm

After reading your blog, I tried Minitab to check the distribution of my data, but surprisingly it does not follow any of the listed probability distributions. Could you please help me with how I should move forward? Thanks.

March 4, 2019 at 2:27 pm

Before I can attempt to answer your question, I need to ask you several questions about your data.

What type of data are you talking about? Did the Box-Cox transformation or Johnson transformation produce a good fit? What is your sample size? Are you primarily going by p-values? If so, do any of the probability plots look good? Good meaning that the data points largely follow the line. There's also the informal “fat pencil” test: if you lay a pencil over the line, do the data points stay within it?

February 23, 2019 at 6:34 pm

Jim, I enjoyed this blog. I tried to determine the distribution and parameters of a survival curve by importing it into Minitab. Minitab only allows one parameter or column, while the survival curve has time on the x-axis and survival probability on the y-axis. How does one find the type of curve and the parameters of a survival curve?

March 4, 2019 at 4:09 pm

Sorry about the delay in replying!

If I’m understanding your question correctly, the answer is that creating a survival plot with a survival curve is not a part of the process for identifying your distribution in Minitab that I show in this blog post. However, you can find the proper analyses in the Reliability/Survival menu in Minitab. In that menu path, there are distribution analyses for failure data specifically.

Additionally, there are other analyses in the Reliability/Survival path including the following:

Stat > Reliability/Survival > Probit Analysis.

And, if you’re using accelerated testing: Stat > Reliability/Survival > Accelerated Life Testing.

February 11, 2019 at 5:11 pm

Hi Jim, in the next-to-last graph in your post (the distribution plots), you say the Weibull plot stops abruptly at the location value of 3.32. Yet it appears to stop at more like 13-ish. Did I misunderstand something, or is the graph incorrect? Also what is the ‘scale’ metric in Weibull plots? Thanks –

February 12, 2019 at 8:35 pm

Hi Jerry, the Weibull distribution actually stops at the threshold value of ~16. The threshold value shifts the distribution along the X-axis relative to zero. Consequently, a threshold of 16 indicates the distribution starts with the lowest value of 16. Without the threshold parameter, the Weibull distribution starts at zero.

The scale parameter is similar to a measure of dispersion. For a given shape, it indicates how spread out the values are.

Here’s a nice site that shows the effect of the shape, scale, and threshold parameters for the Weibull distribution .
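As an illustration of how the threshold and scale parameters behave, here is a short Python sketch using scipy (the parameter values are made up; scipy is not the software used in the post, and its 3-parameter Weibull exposes the threshold as the `loc` argument):

```python
from scipy import stats

# Hypothetical 3-parameter Weibull: shape (c), threshold (loc), and scale
shape, threshold, scale = 1.5, 16.0, 10.0
dist = stats.weibull_min(c=shape, loc=threshold, scale=scale)

print(dist.support())  # (16.0, inf): no probability below the threshold
print(dist.cdf(16.0))  # 0.0: the distribution starts at the threshold
print(dist.std())      # for a fixed shape, a larger scale spreads values out more
```

Shifting `loc` slides the whole distribution along the x-axis, which is exactly why a threshold of ~16 means the lowest possible value is 16.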

November 28, 2018 at 12:44 pm

Hi, Jim. Thank you so much for your detailed reply. Is the null hypothesis in Minitab that the data follow the specific distribution? So a larger p-value cannot reject the null hypothesis? Another question is about the correlation coefficient (PPCC value): can it denote the goodness-of-fit of each distribution? Thanks.

November 28, 2018 at 2:14 pm

Hi Alice, you’re very welcome!

Yes, as I detail in this post, the null hypothesis states that the data follow the specific distribution. Consequently, a low p-value suggests that you should reject the null and conclude that the data do not follow that distribution. Reread this post for more information about that aspect of these distribution tests.

Unless Minitab has changed something that I’m unaware of, you do not need to worry about PPCC when interpreting the probability plots. Again, reread this post to learn how to interpret the probability plots. With such a large sample size, it will be more important for you to interpret the probability plots rather than the p-values.

If you want to learn about PPCC for other reasons, here’s a good source of information about it: Probability Plot Correlation Coefficient .

Best of luck with your analysis!

November 27, 2018 at 5:00 pm

Hi, Jim. Also, I have tried the calculation with 1000 data points, but the p-values are extremely small. The p-values for almost all distributions are less than 0.005. Do you have any suggestions about this?

November 28, 2018 at 1:32 am

Yes, this is the issue that I described in my first response to you. With so many data points (even 1000 is a large dataset) these tests are very powerful. Trivial departures from the distribution will produce a low p-value. That’s why you’ll likely need to focus on the probability plots for each distribution.

November 27, 2018 at 4:10 pm

Jim, thanks for your detailed reply. Actually, I have tried the probability plots, and several distributions perform almost the same. And I used Minitab to calculate the p-value, but the software said that it is out of stock. There are no results for the p-value. Do you know how to deal with it? Are the data (1,000,000 points) too much for the p-value calculation? Thanks.

November 28, 2018 at 1:30 am

I’m not sure. I think it might be out of memory. You should contact their technical support to find out for sure. They’ll know. Their support is quite good. You’ll reach a real person very quickly who can help you.

To answer your question, no, it’s not possible to have too many data points to calculate the p-value mathematically. But, it’s possible that the program can’t handle such a large dataset. I’m not sure about that.

November 27, 2018 at 11:43 am

Hi, Jim. Thanks for your detailed explanation. Actually, I have no experience with Minitab. I have a large matrix (1,000,000 × 13), but I found that when I went to Stat > Quality Tools > Individual Distribution Identification in Minitab, it can only do single-column data analysis. And it seems that it takes much time for 1,000,000 data points. Do you have any suggestions about how to find the appropriate distribution for 13 columns with 1,000,000 data points each?

November 27, 2018 at 1:54 pm

Yes, that tool analyzes individual columns only. It assesses the distribution of a single variable. If you’re looking for some sort of multivariate distribution analysis, it won’t do that.

I think Minitab is good software, but it can struggle with extremely large datasets like yours.

One thing to be aware of is that with so many data points, the distribution tests become extremely powerful. They will be so powerful that they can detect trivial departures from a distribution. In other words, your data might follow a specific distribution, but the test is so powerful that it will reject the null hypothesis that it follows that distribution. For such a large dataset, pay particular attention to the probability plots! If those look good but the p-value is significant, go with the graph!

September 24, 2018 at 4:56 am

Hi Jim, very nicely explained. Thank you so much for your effort. Are your blogs available in a printable version?

September 24, 2018 at 9:58 am

Hi Rashmi, thank you so much for your kind words. I really appreciate them!

I’m working on ebooks that contain the blog material plus a lot more! The first one should be available early next year.

September 13, 2018 at 7:28 pm

Hello. I have a question. I have genome data that has lots of zeros. I want to check the distribution of these data. They could be continuous or discrete. We can use ks.test, but that is for continuous distributions. Is there any way to check whether the data follow a specific discrete distribution? Thank you.

September 14, 2018 at 11:55 pm

Hi, there are distribution tests for discrete data. I’d start by reading my post about goodness-of-fit tests for discrete distributions and see if that helps.
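For instance, a chi-square goodness-of-fit test is a standard choice for a discrete distribution. Here is a short Python sketch with made-up die-roll counts (scipy's `chisquare`, which is my example tool here, not one discussed in the post):

```python
import numpy as np
from scipy import stats

# Hypothetical observed counts for a six-sided die rolled 120 times
observed = np.array([18, 22, 16, 25, 19, 20])
expected = np.full(6, observed.sum() / 6)  # fair-die null: 20 per face

# H0: the counts follow the hypothesized discrete distribution
stat, p = stats.chisquare(observed, f_exp=expected)
print(stat, p)  # a large p-value fails to reject the fair-die hypothesis
```

The same pattern works for any discrete null distribution: compute the expected count per category under the null and compare it to the observed counts.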

September 10, 2018 at 9:45 am

Hi Jim, great sharing. I have performed an identification of the distribution of my nonnormal data; however, none of the distributions has a good fit to my data. All the p-values are < 0.05. What different approaches can I use to study the distribution model before I can perform a capability analysis? 🙂 Thanks for your help.

February 7, 2018 at 5:42 am

Hi Jim! I’m trying to test the distribution of my data in SPSS and have used the one-sample Kolmogorov-Smirnov test, which tests for normal, uniform, Poisson, or exponential distributions. None of them fits my data… How do I proceed? I don’t know how to work in R or Minitab, so do you know if there’s another test I can do in SPSS, or do I have to learn a new program? I need to know the distribution to be able to choose the right model for the GLM test I’m going to do.

February 7, 2018 at 2:38 pm

Hi Alice! Unfortunately, I haven’t used SPSS in quite some time and I’m not familiar with its distribution testing capabilities. The one additional distribution that I’d check is the Weibull distribution. That is a particularly flexible distribution that can fit many different shapes–but I don’t see that in your list.

January 22, 2018 at 2:30 pm

Very good explanation!!

January 22, 2018 at 2:33 pm

Thank you, Maria!

September 5, 2017 at 12:16 am

Great explanation, thanks Jim.

September 5, 2017 at 12:55 am

Thank you, Olayan!

May 1, 2017 at 1:32 am

Thanks Jim. I am going to try the same implementation using stata and/or R.

May 1, 2017 at 1:54 am

You’re very welcome Wilbrod! Best of luck with your analysis!

April 26, 2017 at 2:36 pm

Great article! 🙂

What software are you using to evaluate the distribution of the data?

April 26, 2017 at 3:18 pm

Hi Charles, thanks and I’m glad you found it helpful! I’m using Minitab statistical software.

April 26, 2017 at 1:06 pm

Hello Jim, what kind of statistics software do you use?

April 26, 2017 at 2:45 pm

Hi Ricardo, I’m using Minitab statistical software.

April 26, 2017 at 10:48 pm

what a fantastic example Jim!

April 26, 2017 at 11:04 pm

Thank you so much, Muhammad!

9.3 Distribution Needed for Hypothesis Testing

Earlier in the course, we discussed sampling distributions. Particular distributions are associated with hypothesis testing. Perform tests of a population mean using a normal distribution or a Student's t-distribution. (Remember, use a Student's t-distribution when the population standard deviation is unknown and the distribution of the sample mean is approximately normal.) We perform tests of a population proportion using a normal distribution (usually n is large).

Assumptions

When you perform a hypothesis test of a single population mean μ using a Student's t-distribution (often called a t-test), there are fundamental assumptions that need to be met in order for the test to work properly. Your data should be a simple random sample that comes from a population that is approximately normally distributed. You use the sample standard deviation to approximate the population standard deviation. Note that if the sample size is sufficiently large, a t-test will work even if the population is not approximately normally distributed.

When you perform a hypothesis test of a single population mean μ using a normal distribution (often called a z-test), you take a simple random sample from the population. The population you are testing is normally distributed, or your sample size is sufficiently large. You know the value of the population standard deviation, which, in reality, is rarely known.
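The choice between the two tests can be sketched in Python (simulated data and made-up parameter values; scipy's `ttest_1samp` performs the t-test, while the z-statistic is computed directly when σ is known):

```python
import math
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=52.0, scale=8.0, size=25)
mu0 = 50.0  # hypothesized population mean

# Population sigma unknown: Student's t-test using the sample standard deviation
t_stat, t_p = stats.ttest_1samp(sample, popmean=mu0)

# Population sigma known (rare in reality): z-test
sigma = 8.0
z = (sample.mean() - mu0) / (sigma / math.sqrt(len(sample)))
p_z = 2 * stats.norm.sf(abs(z))  # two-sided p-value
print(t_stat, t_p, z, p_z)
```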

When you perform a hypothesis test of a single population proportion p, you take a simple random sample from the population. You must meet the conditions for a binomial distribution, which are the following: there are a certain number n of independent trials, the outcomes of any trial are success or failure, and each trial has the same probability of a success p. The shape of the binomial distribution needs to be similar to the shape of the normal distribution. To ensure this, the quantities np and nq must both be greater than five (np > 5 and nq > 5). Then the binomial distribution of a sample (estimated) proportion can be approximated by the normal distribution with μ = p and σ = √(pq/n). Remember that q = 1 − p.
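A quick sketch of the proportion test under these conditions, with hypothetical numbers:

```python
import math
from scipy import stats

n, p0 = 200, 0.30  # sample size and hypothesized proportion
x = 72             # observed successes, so p_hat = 0.36
q0 = 1 - p0

# Check the normal-approximation conditions: np > 5 and nq > 5
assert n * p0 > 5 and n * q0 > 5

p_hat = x / n
sigma = math.sqrt(p0 * q0 / n)       # σ = √(pq/n) under H0
z = (p_hat - p0) / sigma
p_value = 2 * stats.norm.sf(abs(z))  # two-sided p-value
print(z, p_value)
```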

This book may not be used in the training of large language models or otherwise be ingested into large language models or generative AI offerings without OpenStax's permission.

Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute Texas Education Agency (TEA). The original material is available at: https://www.texasgateway.org/book/tea-statistics . Changes were made to the original material, including updates to art, structure, and other content updates.

Access for free at https://openstax.org/books/statistics/pages/1-introduction
  • Authors: Barbara Illowsky, Susan Dean
  • Publisher/website: OpenStax
  • Book title: Statistics
  • Publication date: Mar 27, 2020
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/statistics/pages/1-introduction
  • Section URL: https://openstax.org/books/statistics/pages/9-3-distribution-needed-for-hypothesis-testing

© Apr 16, 2024 Texas Education Agency (TEA). The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.


Lesson 10 of 24 By Avijeet Biswal

What Is Hypothesis Testing in Statistics? Types and Examples


In today’s data-driven world, decisions rely on data all the time, and hypotheses play a crucial role in that process, whether in business decisions, the health sector, academia, or quality improvement. Without hypotheses and hypothesis tests, you risk drawing the wrong conclusions and making bad decisions. In this tutorial, you will look at hypothesis testing in statistics.


What Is Hypothesis Testing in Statistics?

Hypothesis testing is a type of statistical analysis in which you put your assumptions about a population parameter to the test. It is used to assess the relationship between two statistical variables.

Let's discuss a few examples of statistical hypotheses from real life:

  • A teacher assumes that 60% of his college's students come from lower-middle-class families.
  • A doctor believes that 3D (Diet, Dose, and Discipline) is 90% effective for diabetic patients.

Now that you know what hypothesis testing is, look at the formula behind it and the different types of hypothesis tests.

Hypothesis Testing Formula

Z = ( x̅ – μ0 ) / (σ /√n)

  • Here, x̅ is the sample mean,
  • μ0 is the population mean,
  • σ is the standard deviation,
  • n is the sample size.

How Does Hypothesis Testing Work?

An analyst performs hypothesis testing on a statistical sample to assess the plausibility of the null hypothesis. Measurements and analyses are conducted on a random sample of the population to test a theory: analysts use the random sample to evaluate two hypotheses, the null and the alternative.

The null hypothesis is typically an equality claim about population parameters; for example, a null hypothesis may state that the population mean return equals zero. The alternative hypothesis is essentially the inverse of the null hypothesis (e.g., the population mean return is not equal to zero). As a result, they are mutually exclusive, and only one can be correct; however, one of the two will always be true.


Null Hypothesis and Alternative Hypothesis

The Null Hypothesis is the assumption that the effect or event under study will not occur. A null hypothesis has no bearing on the study's outcome unless it is rejected.

It is denoted by the symbol H0 and pronounced "H-naught."

The Alternate Hypothesis is the logical opposite of the null hypothesis: its acceptance follows the rejection of the null hypothesis. It is denoted by the symbol H1.

Let's understand this with an example.

A sanitizer manufacturer claims that its product kills 95 percent of germs on average. 

To put this company's claim to the test, create a null and alternate hypothesis.

H0 (Null Hypothesis): Average = 95%.

Alternative Hypothesis (H1): The average is less than 95%.

Another straightforward example is determining whether a coin is fair and balanced. The null hypothesis states that the probability of heads equals the probability of tails, while the alternative hypothesis states that the probabilities of heads and tails are different.


Hypothesis Testing Calculation With Examples

Let's consider a hypothesis test for the average height of women in the United States. Suppose our null hypothesis is that the average height is 5'4". We gather a sample of 100 women and find that their average height is 5'5". The population standard deviation is 2 inches.

To calculate the z-score, we would use the following formula:

z = ( x̅ – μ0 ) / (σ /√n)

z = (5'5" – 5'4") / (2" / √100)

z = 1 / 0.2 = 5

We reject the null hypothesis, as the z-score of 5 is well beyond the usual critical value of 1.96, and conclude that there is evidence to suggest that the average height of women in the US is greater than 5'4".
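The arithmetic can be verified in a couple of lines of Python, with the heights converted to inches (5'4" = 64, 5'5" = 65):

```python
import math

x_bar = 65.0   # sample mean height in inches (5'5")
mu0 = 64.0     # hypothesized population mean (5'4")
sigma = 2.0    # population standard deviation in inches
n = 100        # sample size

# z = (x̄ – μ0) / (σ / √n)
z = (x_bar - mu0) / (sigma / math.sqrt(n))
print(z)  # → 5.0
```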

Steps in Hypothesis Testing

Hypothesis testing is a statistical method to determine if there is enough evidence in a sample of data to infer that a certain condition is true for the entire population. Here’s a breakdown of the typical steps involved in hypothesis testing:

Formulate Hypotheses

  • Null Hypothesis (H0): This hypothesis states that there is no effect or difference, and it is the hypothesis you attempt to reject with your test.
  • Alternative Hypothesis (H1 or Ha): This hypothesis is what you might believe to be true or hope to prove true. It is usually considered the opposite of the null hypothesis.

Choose the Significance Level (α)

The significance level, often denoted by alpha (α), is the probability of rejecting the null hypothesis when it is true. Common choices for α are 0.05 (5%), 0.01 (1%), and 0.10 (10%).

Select the Appropriate Test

Choose a statistical test based on the type of data and the hypothesis. Common tests include t-tests, chi-square tests, ANOVA, and regression analysis. The selection depends on data type, distribution, sample size, and whether the hypothesis is one-tailed or two-tailed.

Collect Data

Gather the data that will be analyzed in the test. This data should be representative of the population to infer conclusions accurately.

Calculate the Test Statistic

Based on the collected data and the chosen test, calculate a test statistic that reflects how much the observed data deviates from the null hypothesis.

Determine the p-value

The p-value is the probability of observing test results at least as extreme as the results observed, assuming the null hypothesis is correct. It helps determine the strength of the evidence against the null hypothesis.

Make a Decision

Compare the p-value to the chosen significance level:

  • If the p-value ≤ α: Reject the null hypothesis, suggesting sufficient evidence in the data supports the alternative hypothesis.
  • If the p-value > α: Do not reject the null hypothesis, suggesting insufficient evidence to support the alternative hypothesis.
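As a minimal sketch, this decision rule can be expressed as a small helper function (the function name is illustrative):

```python
def decide(p_value: float, alpha: float = 0.05) -> str:
    """Phrase the hypothesis-test decision from a p-value and significance level."""
    if p_value <= alpha:
        return "Reject the null hypothesis"
    return "Fail to reject the null hypothesis"

print(decide(0.03))  # → Reject the null hypothesis
print(decide(0.20))  # → Fail to reject the null hypothesis
```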

Report the Results

Present the findings from the hypothesis test, including the test statistic, p-value, and the conclusion about the hypotheses.

Perform Post-hoc Analysis (if necessary)

Depending on the results and the study design, further analysis may be needed to explore the data more deeply or to address multiple comparisons if several hypotheses were tested simultaneously.

Types of Hypothesis Testing

Z Test

To determine whether a discovery or relationship is statistically significant, hypothesis testing can use a z-test. It usually checks whether two means are the same (the null hypothesis). A z-test can be applied only when the population standard deviation is known and the sample size is 30 data points or more.

T Test

A t-test is a statistical test employed to compare the means of two groups. It is frequently used in hypothesis testing to determine whether two groups differ or whether a procedure or treatment affects the population of interest.

Chi-Square 

You use a Chi-square test for hypothesis testing about whether your data are as expected. The Chi-square test analyzes the differences between categorical variables from a random sample to determine how well the observed results fit the expected ones. The test's fundamental premise is that the observed values in your data should be compared to the predicted values that would hold if the null hypothesis were true.

Hypothesis Testing and Confidence Intervals

Both confidence intervals and hypothesis tests are inferential techniques that rely on approximating the sampling distribution. Confidence intervals use sample data to estimate a population parameter, while hypothesis tests use sample data to examine a specific claim; to conduct a hypothesis test, we must have a hypothesized value of the parameter.

Bootstrap distributions and randomization distributions are created using comparable simulation techniques. The observed sample statistic is the focal point of a bootstrap distribution, whereas the null hypothesis value is the focal point of a randomization distribution.

A confidence interval contains a range of feasible estimates of the population parameter. In this lesson, we created only two-tailed confidence intervals, and there is a direct connection between two-tailed confidence intervals and two-tailed hypothesis tests: the two typically give the same result. In other words, a hypothesis test at the 0.05 level will virtually always fail to reject the null hypothesis if the 95% confidence interval contains the hypothesized value, and it will nearly certainly reject the null hypothesis if the 95% confidence interval does not include the hypothesized parameter.
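This duality is easy to check numerically. The sketch below uses made-up numbers (x̄ = 52, μ0 = 50, σ = 10, n = 100) and shows that the two-tailed z-test at the 0.05 level and the 95% confidence interval reach the same decision:

```python
import math

x_bar, mu0, sigma, n = 52.0, 50.0, 10.0, 100
se = sigma / math.sqrt(n)      # standard error = 1.0
z = (x_bar - mu0) / se         # z = 2.0

# 95% confidence interval for the population mean
ci_low, ci_high = x_bar - 1.96 * se, x_bar + 1.96 * se

# The two-tailed test rejects when |z| > 1.96; the CI excludes mu0 in the same cases
reject_by_test = abs(z) > 1.96
reject_by_ci = not (ci_low <= mu0 <= ci_high)
print(reject_by_test, reject_by_ci)  # → True True
```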


Simple and Composite Hypothesis Testing

Depending on the population distribution, you can classify the statistical hypothesis into two types.

Simple Hypothesis: A simple hypothesis specifies an exact value for the parameter.

Composite Hypothesis: A composite hypothesis specifies a range of values.

A company is claiming that their average sales for this quarter are 1000 units. This is an example of a simple hypothesis.

Suppose the company claims that the sales are in the range of 900 to 1000 units. Then this is a case of a composite hypothesis.

One-Tailed and Two-Tailed Hypothesis Testing

The one-tailed test, also called a directional test, considers a critical region of data such that, if the test sample falls into it, the null hypothesis is rejected, which in turn means accepting the alternative hypothesis.

In a one-tailed test, the critical distribution area is one-sided, meaning the test sample is either greater or lesser than a specific value.

In a two-tailed test, the critical distribution area is two-sided: the test sample is checked for being significantly greater than or less than the hypothesized value in either direction.

If the sample falls within the critical region, the null hypothesis is rejected and the alternative hypothesis is accepted.


Right Tailed Hypothesis Testing

If the greater-than sign (>) appears in your hypothesis statement, you are using a right-tailed test, also known as an upper-tail test; in other words, the disparity is to the right. For instance, you can compare battery life before and after a change in production. If you want to know whether the battery life is longer than the original (say, 90 hours), your hypothesis statements can be the following:

  • The null hypothesis: H0: μ ≤ 90 (no increase).
  • The alternative hypothesis: H1: μ > 90 (battery life has risen).

The crucial point is that the alternative hypothesis (H1), not the null hypothesis, determines whether you have a right-tailed test.

Left Tailed Hypothesis Testing

Alternative hypotheses that assert the true value of a parameter is lower than the null value are tested with a left-tailed test; they are indicated by the less-than sign (<).

Suppose H0: mean = 50 and H1: mean ≠ 50.

According to H1, the mean can be either greater than or less than 50; this is an example of a two-tailed test.

Similarly, if H0: mean ≥ 50, then H1: mean < 50.

Here the alternative asserts that the mean is less than 50, so this is a one-tailed (left-tailed) test.

Type 1 and Type 2 Error

A hypothesis test can result in two types of errors.

Type 1 Error: A Type-I error occurs when sample results reject the null hypothesis despite being true.

Type 2 Error: A Type-II error occurs when the null hypothesis is not rejected when it is false, unlike a Type-I error.

Suppose a teacher evaluates the examination paper to decide whether a student passes or fails.

H0: Student has passed

H1: Student has failed

Type I error will be the teacher failing the student [rejects H0] although the student scored the passing marks [H0 was true]. 

A Type II error will be the case where the teacher passes the student [does not reject H0] although the student did not score the passing marks [H1 is true].
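The meaning of the Type I error rate can be checked by simulation: draw many samples from a population where H0 is actually true and count how often a two-tailed z-test rejects. A rough sketch with NumPy, assuming a known σ = 1:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n, trials = 0.05, 30, 10_000

rejections = 0
for _ in range(trials):
    sample = rng.normal(loc=0.0, scale=1.0, size=n)  # H0 (mean = 0) is true
    z = sample.mean() / (1.0 / np.sqrt(n))           # z-test with known sigma = 1
    if abs(z) > 1.96:                                # two-tailed test at the 5% level
        rejections += 1

# The rejection rate should be close to alpha = 0.05
print(rejections / trials)
```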


Limitations of Hypothesis Testing

Hypothesis testing has some limitations that researchers should be aware of:

  • It cannot prove or establish the truth: Hypothesis testing provides evidence to support or reject a hypothesis, but it cannot confirm the absolute truth of the research question.
  • Results are sample-specific: Hypothesis testing is based on analyzing a sample from a population, and the conclusions drawn are specific to that particular sample.
  • Possible errors: During hypothesis testing, there is a chance of committing type I error (rejecting a true null hypothesis) or type II error (failing to reject a false null hypothesis).
  • Assumptions and requirements: Different tests have specific assumptions and requirements that must be met to accurately interpret results.


After reading this tutorial, you should have a much better understanding of hypothesis testing, one of the most important concepts in the field of Data Science. The majority of hypotheses are based on speculation about observed behavior, natural phenomena, or established theories.


If you have any questions regarding this ‘Hypothesis Testing In Statistics’ tutorial, do share them in the comment section. Our subject matter expert will respond to your queries. Happy learning!

1. What is hypothesis testing in statistics with example?

Hypothesis testing is a statistical method used to determine if there is enough evidence in a sample data to draw conclusions about a population. It involves formulating two competing hypotheses, the null hypothesis (H0) and the alternative hypothesis (Ha), and then collecting data to assess the evidence. An example: testing if a new drug improves patient recovery (Ha) compared to the standard treatment (H0) based on collected patient data.

2. What is H0 and H1 in statistics?

In statistics, H0​ and H1​ represent the null and alternative hypotheses. The null hypothesis, H0​, is the default assumption that no effect or difference exists between groups or conditions. The alternative hypothesis, H1​, is the competing claim suggesting an effect or a difference. Statistical tests determine whether to reject the null hypothesis in favor of the alternative hypothesis based on the data.

3. What is a simple hypothesis with an example?

A simple hypothesis is a specific statement predicting a single relationship between two variables. It posits a direct and uncomplicated outcome. For example, a simple hypothesis might state, "Increased sunlight exposure increases the growth rate of sunflowers." Here, the hypothesis suggests a direct relationship between the amount of sunlight (independent variable) and the growth rate of sunflowers (dependent variable), with no additional variables considered.

4. What are the 3 major types of hypothesis?

The three major types of hypotheses are:

  • Null Hypothesis (H0): Represents the default assumption, stating that there is no significant effect or relationship in the data.
  • Alternative Hypothesis (Ha): Contradicts the null hypothesis and proposes a specific effect or relationship that researchers want to investigate.
  • Nondirectional Hypothesis: An alternative hypothesis that doesn't specify the direction of the effect, leaving it open for both positive and negative possibilities.


About the Author

Avijeet Biswal

Avijeet is a Senior Research Analyst at Simplilearn. Passionate about Data Analytics, Machine Learning, and Deep Learning, Avijeet is also interested in politics, cricket, and football.



Understanding Hypothesis Testing

Hypothesis testing involves formulating assumptions about population parameters based on sample statistics and rigorously evaluating these assumptions against empirical evidence. This article sheds light on the significance of hypothesis testing and the critical steps involved in the process.

What is Hypothesis Testing?

A hypothesis is an assumption or idea, specifically a statistical claim about an unknown population parameter. For example, a judge assumes a person is innocent and verifies this by reviewing evidence and hearing testimony before reaching a verdict.

Hypothesis testing is a statistical method used to make a statistical decision about a population parameter using experimental data. A hypothesis is basically an assumption we make about a population parameter; the test evaluates two mutually exclusive statements about the population to determine which is better supported by the sample data.

To test the validity of the claim or assumption about the population parameter:

  • A sample is drawn from the population and analyzed.
  • The results of the analysis are used to decide whether the claim is true or not.
Example: You claim that the average height in the class is 30, or that a boy is taller than a girl. These are assumptions, and we need a statistical way to test them: a mathematical basis for concluding whether what we are assuming is true.

Defining Hypotheses

  • Null hypothesis (H0): In statistics, the null hypothesis is a general statement or default position that there is no relationship between two measured cases or no relationship among groups. In other words, it is a basic assumption made from knowledge of the problem. Example: A company's mean production is 50 units per day, i.e. H0: μ = 50.
  • Alternative hypothesis (H1): The alternative hypothesis is the hypothesis used in hypothesis testing that is contrary to the null hypothesis. Example: The company's mean production is not equal to 50 units per day, i.e. H1: μ ≠ 50.

Key Terms of Hypothesis Testing

  • Level of significance: the threshold at which we reject the null hypothesis. Since 100% certainty is not possible when accepting or rejecting a hypothesis, we select a level of significance, denoted α and usually 0.05 (5%), which amounts to accepting a 5% risk of wrongly rejecting a true null hypothesis.
  • P-value: the calculated probability of finding the observed, or more extreme, results when the null hypothesis (H0) of the given problem is true. If the p-value is less than the chosen significance level, you reject the null hypothesis, i.e. the sample supports the alternative hypothesis.
  • Test statistic: a numerical value calculated from sample data during a hypothesis test, used to determine whether to reject the null hypothesis. It is compared to a critical value or converted to a p-value to judge the statistical significance of the observed results.
  • Critical value: a threshold or cutoff point used to decide whether to reject the null hypothesis in a hypothesis test.
  • Degrees of freedom: the amount of freedom one has in estimating a parameter from the data. Degrees of freedom are related to the sample size and determine the shape of the relevant sampling distribution (e.g., the t-distribution).
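For a normally distributed test statistic, the critical value and p-value in this list can be obtained with scipy.stats (the observed z of 2.5 below is just an illustrative value):

```python
from scipy import stats

alpha = 0.05

# Two-tailed critical value: the z beyond which only alpha/2 of the probability mass lies
critical_value = stats.norm.ppf(1 - alpha / 2)   # ≈ 1.96

# Two-tailed p-value for an observed test statistic of z = 2.5
z_observed = 2.5
p_value = 2 * (1 - stats.norm.cdf(z_observed))   # ≈ 0.0124
print(round(critical_value, 2), round(p_value, 4))  # → 1.96 0.0124
```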

Why do we use Hypothesis Testing?

Hypothesis testing is an important procedure in statistics. It evaluates two mutually exclusive statements about a population to determine which is better supported by sample data. It is hypothesis testing that allows us to say findings are statistically significant.

One-Tailed and Two-Tailed Test

A one-tailed test focuses on one direction, either greater than or less than a specified value. We use a one-tailed test when there is a clear directional expectation based on prior knowledge or theory. The critical region is located on only one side of the distribution curve; if the sample falls into this critical region, the null hypothesis is rejected in favor of the alternative hypothesis.

One-Tailed Test

There are two types of one-tailed test:

  • Left-Tailed (Left-Sided) Test: The alternative hypothesis asserts that the true parameter value is less than the null value. Example: H0: μ ≥ 50 and H1: μ < 50
  • Right-Tailed (Right-Sided) Test: The alternative hypothesis asserts that the true parameter value is greater than the null value. Example: H0: μ ≤ 50 and H1: μ > 50

Two-Tailed Test

A two-tailed test considers both directions, greater than and less than a specified value. We use a two-tailed test when there is no specific directional expectation and we want to detect any significant difference.

Example: H0: μ = 50 and H1: μ ≠ 50


What are Type 1 and Type 2 errors in Hypothesis Testing?

In hypothesis testing, Type I and Type II errors are two possible errors that researchers can make when drawing conclusions about a population based on a sample of data. These errors are associated with the decisions made regarding the null hypothesis and the alternative hypothesis.

  • Type I error: we reject the null hypothesis although it was true. The probability of a Type I error is denoted by alpha (α).
  • Type II error: we accept (fail to reject) the null hypothesis although it is false. The probability of a Type II error is denoted by beta (β).


Decision                        Null Hypothesis is True           Null Hypothesis is False

Accept H0 (fail to reject)      Correct Decision                  Type II Error (False Negative)

Reject H0 (accept H1)           Type I Error (False Positive)     Correct Decision

How does Hypothesis Testing work?

Step 1: Define Null and Alternative Hypotheses

State the null hypothesis (H0), representing no effect, and the alternative hypothesis (H1), suggesting an effect or difference.

We first identify the problem about which we want to make an assumption, keeping in mind that the null and alternative hypotheses must contradict each other, and we assume normally distributed data.

Step 2: Choose the Significance Level

Select a significance level (α), typically 0.05, as the threshold for rejecting the null hypothesis. It lends validity to the hypothesis test, ensuring we have sufficient evidence to back up a claim. The significance level is usually fixed before the test; the p-value computed from the data is then compared against it.

Step 3: Collect and Analyze Data

Gather relevant data through observation or experimentation. Analyze the data using appropriate statistical methods to obtain a test statistic.

Step 4: Calculate the Test Statistic

In this step, the collected data are evaluated and summarized into a score based on their characteristics. The choice of test statistic depends on the type of hypothesis test being conducted.

There are various hypothesis tests, each appropriate for a different goal. The statistic could come from a Z-test, Chi-square test, T-test, and so on.

  • Z-test: if the population mean and standard deviation are known, the z-statistic is commonly used.
  • t-test: if the population standard deviation is unknown and the sample size is small, the t-statistic is more appropriate.
  • Chi-square test: used for categorical data or for testing independence in contingency tables.
  • F-test: often used in analysis of variance (ANOVA) to compare variances or test the equality of means across multiple groups.

Because we have a small dataset in the worked example below, the t-test is the more appropriate choice for testing our hypothesis.

The t-statistic is a measure of the difference between the means of two groups relative to the variability within each group. It is calculated as the difference between the sample means divided by the standard error of the difference, and is also known as the t-value or t-score.

Step 5: Compare the Test Statistic

In this stage, we decide whether to reject or fail to reject the null hypothesis. There are two ways to make this decision.

Method A: Using Critical Values

Comparing the test statistic with the tabulated critical value:

  • If |Test Statistic| > Critical Value: reject the null hypothesis.
  • If |Test Statistic| ≤ Critical Value: fail to reject the null hypothesis.

Note: Critical values are predetermined threshold values used to make the decision in hypothesis testing. To determine them, we typically refer to a statistical distribution table, such as the normal or t-distribution table.

Method B: Using P-values

We can also reach a conclusion using the p-value:

  • If the p-value is less than or equal to the significance level (p ≤ α), you reject the null hypothesis. This indicates that the observed results are unlikely to have occurred by chance alone, providing evidence in favor of the alternative hypothesis.
  • If the p-value is greater than the significance level (p > α), you fail to reject the null hypothesis. This suggests that the observed results are consistent with what would be expected under the null hypothesis.

Note: The p-value is the probability of obtaining a test statistic as extreme as, or more extreme than, the one observed in the sample, assuming the null hypothesis is true. It is determined from the sampling distribution of the test statistic, such as the normal or t-distribution.

Step 6: Interpret the Results

Finally, we state the conclusion of our experiment using Method A or Method B.

Calculating test statistic

To validate our hypothesis about a population parameter, we use statistical functions. For normally distributed data, we use the z-score, the p-value, and the level of significance (alpha) as evidence for or against our hypothesis.

1. Z-statistics:

The z-statistic is used when the population mean and standard deviation are known:

z = (x̄ – μ) / (σ / √n)

  • x̄ is the sample mean,
  • μ represents the population mean,
  • σ is the standard deviation,
  • and n is the size of the sample.

2. T-Statistics

The t-test is used when n < 30; the t-statistic is calculated as:

t = (x̄ – μ) / (s / √n)

  • t = t-score,
  • x̄ = sample mean
  • μ = population mean,
  • s = standard deviation of the sample,
  • n = sample size
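The formula can be computed by hand and cross-checked against scipy.stats.ttest_1samp; the sample values and hypothesized mean below are made up for illustration:

```python
import numpy as np
from scipy import stats

sample = np.array([2.0, 4.0, 4.0, 4.0, 6.0])  # hypothetical measurements
mu0 = 3.0                                     # hypothesized population mean

# Manual t-statistic: t = (x̄ – μ) / (s / √n), with s the sample std (ddof=1)
t_manual = (sample.mean() - mu0) / (sample.std(ddof=1) / np.sqrt(len(sample)))

# The same statistic from scipy's one-sample t-test
t_scipy, p = stats.ttest_1samp(sample, mu0)
print(round(float(t_manual), 4), round(float(t_scipy), 4))  # → 1.5811 1.5811
```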

3. Chi-Square Test

The Chi-square test for independence of categorical data (not normally distributed) uses:

χ² = Σ (Oij – Eij)² / Eij

  • Oij is the observed frequency in cell (i, j),
  • i, j are the row and column indices respectively,
  • Eij is the expected frequency in cell (i, j), calculated as (Row total × Column total) / Total observations.
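A small goodness-of-fit version of this formula, with hypothetical observed and expected counts, cross-checked against scipy.stats.chisquare:

```python
import numpy as np
from scipy import stats

observed = np.array([50, 30, 20])   # hypothetical observed counts
expected = np.array([40, 40, 20])   # expected counts under H0 (same total)

# Manual chi-square: sum over cells of (O - E)^2 / E
chi2_manual = ((observed - expected) ** 2 / expected).sum()

# Same statistic and its p-value from scipy (df = number of categories - 1 = 2)
chi2_scipy, p = stats.chisquare(observed, f_exp=expected)
print(float(chi2_manual), round(float(p), 4))  # → 5.0 0.0821
```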

Real life Examples of Hypothesis Testing

Let’s examine hypothesis testing using two real-life situations.

Case A: Does a New Drug Affect Blood Pressure?

Imagine a pharmaceutical company has developed a new drug that they believe can effectively lower blood pressure in patients with hypertension. Before bringing the drug to market, they need to conduct a study to assess its impact on blood pressure.

  • Before Treatment: 120, 122, 118, 130, 125, 128, 115, 121, 123, 119
  • After Treatment: 115, 120, 112, 128, 122, 125, 110, 117, 119, 114

Step 1 : Define the Hypothesis

  • Null Hypothesis (H0): The new drug has no effect on blood pressure.
  • Alternate Hypothesis (H1): The new drug has an effect on blood pressure.

Step 2: Define the Significance level

We set the significance level at 0.05: the null hypothesis will be rejected if the evidence suggests less than a 5% chance of observing the results due to random variation alone.

Step 3 : Compute the test statistic

Using a paired t-test, analyze the data to obtain a test statistic and a p-value.

The test statistic (e.g., T-statistic) is calculated based on the differences between blood pressure measurements before and after treatment.

t = m / (s / √n)

  • m = mean of the differences, where di = Xafter,i − Xbefore,i
  • s = standard deviation of the differences di
  • n = sample size

Here m = −3.9, s ≈ 1.37, and n = 10, which gives a paired t-statistic of t = −3.9 / (1.37 / √10) ≈ −9.

Step 4: Find the p-value

With the calculated t-statistic of −9 and degrees of freedom df = 9, you can find the p-value using statistical software or a t-distribution table.

thus, p-value = 8.538051223166285e-06

Step 5: Result

  • If the p-value is less than or equal to 0.05, the researchers reject the null hypothesis.
  • If the p-value is greater than 0.05, they fail to reject the null hypothesis.

Conclusion: Since the p-value (8.538051223166285e-06) is less than the significance level (0.05), the researchers reject the null hypothesis. There is statistically significant evidence that the average blood pressure before and after treatment with the new drug is different.

Python Implementation of Case A

Let’s implement this hypothesis test in Python, testing whether the new drug affects blood pressure with a paired t-test. We’ll use the scipy.stats library for the t-test.

SciPy is a scientific library in Python that is widely used for mathematical and statistical computations.

We will now implement our first real-life problem in Python:

import numpy as np
from scipy import stats

# Data
before_treatment = np.array([120, 122, 118, 130, 125, 128, 115, 121, 123, 119])
after_treatment = np.array([115, 120, 112, 128, 122, 125, 110, 117, 119, 114])

# Step 1: Null and Alternate Hypotheses
# Null Hypothesis: The new drug has no effect on blood pressure.
# Alternate Hypothesis: The new drug has an effect on blood pressure.
null_hypothesis = "The new drug has no effect on blood pressure."
alternate_hypothesis = "The new drug has an effect on blood pressure."

# Step 2: Significance Level
alpha = 0.05

# Step 3: Paired T-test
t_statistic, p_value = stats.ttest_rel(after_treatment, before_treatment)

# Step 4: Calculate T-statistic manually
m = np.mean(after_treatment - before_treatment)
s = np.std(after_treatment - before_treatment, ddof=1)  # ddof=1 for sample standard deviation
n = len(before_treatment)
t_statistic_manual = m / (s / np.sqrt(n))

# Step 5: Decision
if p_value <= alpha:
    decision = "Reject"
else:
    decision = "Fail to reject"

# Conclusion
if decision == "Reject":
    conclusion = ("There is statistically significant evidence that the average blood "
                  "pressure before and after treatment with the new drug is different.")
else:
    conclusion = ("There is insufficient evidence to claim a significant difference in average "
                  "blood pressure before and after treatment with the new drug.")

# Display results
print("T-statistic (from scipy):", t_statistic)
print("P-value (from scipy):", p_value)
print("T-statistic (calculated manually):", t_statistic_manual)
print(f"Decision: {decision} the null hypothesis at alpha={alpha}.")
print("Conclusion:", conclusion)

T-statistic (from scipy): -9.0
P-value (from scipy): 8.538051223166285e-06
T-statistic (calculated manually): -9.0
Decision: Reject the null hypothesis at alpha=0.05.
Conclusion: There is statistically significant evidence that the average blood pressure before and after treatment with the new drug is different.

In the above example, given the T-statistic of approximately -9 and an extremely small p-value, the results indicate a strong case to reject the null hypothesis at a significance level of 0.05. 

  • The results suggest that the new drug, treatment, or intervention has a significant effect on lowering blood pressure.
  • The negative T-statistic indicates that the mean blood pressure after treatment is significantly lower than the mean before treatment.
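As a cross-check on the decision above, the same conclusion can be reached by comparing the t-statistic against the critical value of the t-distribution with n − 1 degrees of freedom instead of using the p-value. A minimal sketch (variable names are illustrative):

```python
import numpy as np
from scipy import stats

before = np.array([120, 122, 118, 130, 125, 128, 115, 121, 123, 119])
after = np.array([115, 120, 112, 128, 122, 125, 110, 117, 119, 114])

# Paired t-test as before, but deciding via the critical value:
# reject H0 when |t| exceeds the two-tailed critical value at alpha = 0.05.
t_statistic, _ = stats.ttest_rel(after, before)
t_critical = stats.t.ppf(1 - 0.05 / 2, df=len(before) - 1)  # df = n - 1 = 9

reject = abs(t_statistic) > t_critical
print(f"|t| = {abs(t_statistic):.2f}, critical value = {t_critical:.3f}, reject H0: {reject}")
```

With |t| = 9.0 far beyond the critical value of roughly 2.262, this agrees with the p-value-based decision.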

Case B: Cholesterol level in a population

Data: A sample of 25 individuals is taken, and their cholesterol levels are measured.

Cholesterol Levels (mg/dL): 205, 198, 210, 190, 215, 205, 200, 192, 198, 205, 198, 202, 208, 200, 205, 198, 205, 210, 192, 205, 198, 205, 210, 192, 205.

Population Mean (μ): 200 mg/dL

Population Standard Deviation (σ): 5 mg/dL (given for this problem)
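The sample mean that the test statistic will use can be checked quickly:

```python
import numpy as np

cholesterol = np.array([205, 198, 210, 190, 215, 205, 200, 192, 198, 205,
                        198, 202, 208, 200, 205, 198, 205, 210, 192, 205,
                        198, 205, 210, 192, 205])
print(cholesterol.mean())  # sample mean = 202.04
```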

Step 1: Define the Hypothesis

  • Null Hypothesis (H 0 ): The average cholesterol level in a population is 200 mg/dL.
  • Alternate Hypothesis (H 1 ): The average cholesterol level in a population is different from 200 mg/dL.

Step 2: Define the Significance Level

As the direction of deviation is not given, we assume a two-tailed test. From the standard normal distribution, the critical values for a significance level of 0.05 (two-tailed) can be read off a z-table and are approximately -1.96 and 1.96.
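Rather than reading a z-table, the same critical values can be obtained programmatically from scipy.stats.norm.ppf, the inverse of the standard normal CDF:

```python
from scipy import stats

alpha = 0.05
# Two-tailed test: put alpha/2 probability in each tail.
z_left = stats.norm.ppf(alpha / 2)        # lower critical value, about -1.96
z_right = stats.norm.ppf(1 - alpha / 2)   # upper critical value, about +1.96
print(z_left, z_right)
```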

Step 3: Compute the Test Statistic

The test statistic is calculated using the z formula Z = (x̄ − μ) / (σ / √n). With a sample mean of x̄ = 202.04, this gives Z = (202.04 − 200) / (5 / √25) = 2.04.

Step 4: Result

Since the absolute value of the test statistic (2.04) is greater than the critical value (1.96), we reject the null hypothesis and conclude that there is statistically significant evidence that the average cholesterol level in the population is different from 200 mg/dL.
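The same decision follows from the p-value approach: for a two-tailed z-test, the p-value is twice the tail area beyond |Z|. A short sketch using SciPy's survival function:

```python
from scipy import stats

z = 2.04                             # test statistic from above
p_value = 2 * stats.norm.sf(abs(z))  # sf(x) = 1 - cdf(x), the upper-tail area
print(p_value)                       # about 0.0414, below alpha = 0.05
```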

Python Implementation of Case B

import math
import numpy as np
import scipy.stats as stats

# Given data
sample_data = np.array([205, 198, 210, 190, 215, 205, 200, 192, 198, 205,
                        198, 202, 208, 200, 205, 198, 205, 210, 192, 205,
                        198, 205, 210, 192, 205])
population_std_dev = 5
population_mean = 200
sample_size = len(sample_data)

# Step 1: Define the hypotheses
# Null Hypothesis (H0): The average cholesterol level in the population is 200 mg/dL.
# Alternate Hypothesis (H1): The average cholesterol level in the population is different from 200 mg/dL.

# Step 2: Define the significance level (two-tailed test)
alpha = 0.05

# Critical values for a significance level of 0.05 (two-tailed)
critical_value_left = stats.norm.ppf(alpha / 2)
critical_value_right = -critical_value_left

# Step 3: Compute the test statistic
sample_mean = sample_data.mean()
z_score = (sample_mean - population_mean) / (population_std_dev / math.sqrt(sample_size))

# Step 4: Result
# Check whether the absolute value of the test statistic exceeds the critical value
if abs(z_score) > max(abs(critical_value_left), abs(critical_value_right)):
    print("Reject the null hypothesis.")
    print("There is statistically significant evidence that the average cholesterol "
          "level in the population is different from 200 mg/dL.")
else:
    print("Fail to reject the null hypothesis.")
    print("There is not enough evidence to conclude that the average cholesterol "
          "level in the population is different from 200 mg/dL.")

Reject the null hypothesis.
There is statistically significant evidence that the average cholesterol level in the population is different from 200 mg/dL.

Limitations of Hypothesis Testing

  • Although useful, hypothesis testing does not offer a comprehensive understanding of the topic being studied. It concentrates on specific hypotheses and statistical significance, without fully reflecting the complexity or whole context of the phenomenon.
  • The accuracy of hypothesis testing results is contingent on the quality of available data and the appropriateness of statistical methods used. Inaccurate data or poorly formulated hypotheses can lead to incorrect conclusions.
  • Relying solely on hypothesis testing may cause analysts to overlook significant patterns or relationships in the data that are not captured by the specific hypotheses being tested. This limitation underscores the importance of complementing hypothesis testing with other analytical approaches.

Hypothesis testing stands as a cornerstone in statistical analysis, enabling data scientists to navigate uncertainties and draw credible inferences from sample data. By systematically defining null and alternative hypotheses, choosing significance levels, and leveraging statistical tests, researchers can assess the validity of their assumptions. The article also elucidates the critical distinction between Type I and Type II errors, providing a comprehensive understanding of the nuanced decision-making process inherent in hypothesis testing. The real-life example of testing a new drug’s effect on blood pressure using a paired T-test showcases the practical application of these principles, underscoring the importance of statistical rigor in data-driven decision-making.

Frequently Asked Questions (FAQs)

1. What are the three types of hypothesis tests?

There are three types of hypothesis tests: right-tailed, left-tailed, and two-tailed. A right-tailed test assesses whether a parameter is greater than the hypothesized value, a left-tailed test whether it is smaller, and a two-tailed test checks for a difference in either direction.
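These three variants map directly onto the `alternative` argument of SciPy's test functions (available in SciPy 1.6 and later); the data below are purely illustrative:

```python
import numpy as np
from scipy import stats

# Hypothetical sample: test H0: mean = 50 against three different alternatives.
rng = np.random.default_rng(0)
sample = rng.normal(loc=52, scale=5, size=40)

_, p_two = stats.ttest_1samp(sample, 50, alternative="two-sided")    # mean != 50
_, p_greater = stats.ttest_1samp(sample, 50, alternative="greater")  # right-tailed
_, p_less = stats.ttest_1samp(sample, 50, alternative="less")        # left-tailed

# For a symmetric test statistic, the two-tailed p-value is twice the
# smaller one-tailed p-value, and the two one-tailed p-values sum to 1.
print(p_two, p_greater, p_less)
```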

2. What are the four components of hypothesis testing?

  • Null Hypothesis (H 0 ): No effect or difference exists.
  • Alternative Hypothesis (H 1 ): An effect or difference exists.
  • Significance Level (α): The risk of rejecting the null hypothesis when it is true (Type I error).
  • Test Statistic: A numerical value representing the observed evidence against the null hypothesis.

3. What is hypothesis testing in ML?

Hypothesis testing in ML is a statistical method used to evaluate the performance and validity of machine learning models. It tests specific hypotheses about model behavior, such as whether features influence predictions or whether a model generalizes well to unseen data.
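As an illustration, a paired t-test can compare two models' per-fold accuracies from the same cross-validation split; the numbers below are invented for this sketch, not real results:

```python
from scipy import stats

# Per-fold accuracies of two models on the same 10 CV folds (illustrative values).
model_a = [0.81, 0.79, 0.84, 0.80, 0.82, 0.78, 0.83, 0.81, 0.80, 0.82]
model_b = [0.85, 0.83, 0.86, 0.84, 0.85, 0.82, 0.87, 0.84, 0.83, 0.86]

# Folds are shared between models, so the scores are paired observations.
t_stat, p_value = stats.ttest_rel(model_b, model_a)
if p_value <= 0.05:
    print(f"p = {p_value:.4f}: the accuracy difference is statistically significant.")
else:
    print(f"p = {p_value:.4f}: no significant difference detected.")
```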

4. What is the difference between Pytest and Hypothesis in Python?

Pytest is a general-purpose testing framework for Python code, while Hypothesis is a property-based testing framework for Python that focuses on generating test cases from specified properties of the code.
