Logo for Maricopa Open Digital Press

Chapter 10: Hypothesis Testing with Z

Setting up the hypotheses.

When setting up the hypotheses for a z test, the parameter is associated with a population mean (in the previous chapter's examples, the null hypotheses used 0). The z test is an occasion in which the null hypothesis is often a value other than 0. For example, if we are working with mothers in the U.S. whose children are at risk of low birth weight, we can use 7.47 pounds, the average birth weight in the U.S., as our null value and test for differences against it. For now, we will focus on testing the value of a single mean against what we expect from the population.

Using birthweight as an example, our null hypothesis takes the form: H 0 : μ = 7.47. Notice that we are testing the value of μ, the population parameter, NOT the sample statistic X̄ (or M). We are referring to the data in raw form right now (we have not standardized it using z yet). Again, using inferential statistics, we are interested in understanding the population by drawing on our sample observations. For the research question, we have a mean value from the sample to use; it is observed data that serves as a comparison against a set point.

As mentioned earlier, the alternative hypothesis is simply the reverse of the null hypothesis, and there are three options, depending on where we expect the difference to lie. We will set the criteria for rejecting the null hypothesis based on the directionality (greater than, less than, or not equal to) of the alternative.

If we expect our obtained sample mean to be above or below the null hypothesis value (knowing which direction), we set a directional hypothesis. Our alternative hypothesis takes its form from the research question itself. In our example with birthweight, this could be presented as H A : μ > 7.47 or H A : μ < 7.47.

Note that we should only use a directional hypothesis if we have a good reason, based on prior observations or research, to suspect a particular direction. When we do not know the direction, such as when we are entering a new area of research, we use a non-directional alternative hypothesis. In our birthweight example, this could be set as H A : μ ≠ 7.47.

In working with data for this course, we will need to set a critical value of the test statistic based on alpha (α) in order to use the test statistic tables in the back of the book. This determines the critical rejection region, which has a set critical value based on α.

Determining Critical Value from α

We set alpha (α) before collecting data in order to determine whether or not we should reject the null hypothesis. We set this value beforehand to avoid biasing ourselves by viewing our results and then determining what criteria we should use.

When a research hypothesis predicts an effect but does not predict a direction for the effect, it is called a non-directional hypothesis. To test the significance of a non-directional hypothesis, we have to consider the possibility that the sample could be extreme at either tail of the comparison distribution. We call this a two-tailed test.

Figure 1. A two-tailed test for a non-directional hypothesis with z; area C is the critical rejection region.

When a research hypothesis predicts a direction for the effect, it is called a directional hypothesis. To test the significance of a directional hypothesis, we have to consider the possibility that the sample could be extreme at one tail of the comparison distribution. We call this a one-tailed test.

Figure 2. A one-tailed test for a directional hypothesis (predicting an increase) with z; area C is the critical rejection region.

Determining Cutoff Scores with Two-Tailed Tests

Typically we specify an α level before analyzing the data. If the data analysis results in a probability value below the α level, then the null hypothesis is rejected; if it is not, then the null hypothesis is not rejected. In other words, if our data produce values that meet or exceed this threshold, then we have sufficient evidence to reject the null hypothesis; if not, we fail to reject the null (we never “accept” the null). According to this perspective, if a result is significant, then it does not matter how significant it is. Moreover, if it is not significant, then it does not matter how close to being significant it is. Therefore, if the 0.05 level is being used, then probability values of 0.049 and 0.001 are treated identically. Similarly, probability values of 0.06 and 0.34 are treated identically. Note that we will discuss effect size later, which addresses this challenge of NHST.

When setting the probability value, there is a special complication in a two-tailed test. We have to divide the significance percentage between the two tails. For example, with a 5% significance level, we reject the null hypothesis only if the sample is so extreme that it is in either the top 2.5% or the bottom 2.5% of the comparison distribution. This keeps the overall level of significance at a total of 5%. A one-tailed test also has an extreme cutoff value, but only one side of the distribution is considered, so the entire 5% sits in a single tail.

Figure 3. Critical value differences in one- and two-tailed tests.

Let’s review the set critical values for z.

We discussed z-scores and probability in chapter 8.  If we revisit the z-score for 5% and 1%, we can identify the critical regions for the critical rejection areas from the unit standard normal table.

  • A two-tailed test at the 5% level has a critical boundary Z score of +1.96 and -1.96
  • A one-tailed test at the 5% level has a critical boundary Z score of +1.64 or -1.64
  • A two-tailed test at the 1% level has a critical boundary Z score of +2.58 and -2.58
  • A one-tailed test at the 1% level has a critical boundary Z score of +2.33 or -2.33.
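These cutoffs can be reproduced without a printed table. Below is a minimal sketch using Python's standard-library `statistics.NormalDist` (the function name `critical_z` is ours, not the chapter's):

```python
from statistics import NormalDist

def critical_z(alpha: float, tails: int) -> float:
    """Return the (positive) critical z boundary for a given alpha level."""
    # For a two-tailed test, alpha is split evenly between the two tails.
    tail_area = alpha / tails
    return NormalDist().inv_cdf(1 - tail_area)

print(round(critical_z(0.05, 2), 2))  # 1.96
print(round(critical_z(0.05, 1), 2))  # 1.64
print(round(critical_z(0.01, 2), 2))  # 2.58
print(round(critical_z(0.01, 1), 2))  # 2.33
```

The negative boundaries are just the mirror images of these values, since the standard normal distribution is symmetric around 0.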

Review: Critical values, p-values, and significance level

There are two criteria we use to assess whether our data meet the thresholds established by our chosen significance level, and they both have to do with our discussions of probability and distributions. Recall that probability refers to the likelihood of an event, given some situation or set of conditions. In hypothesis testing, that situation is the assumption that the null hypothesis value is the correct value, or that there is no effect. The value laid out in H 0 is our condition under which we interpret our results. To reject this assumption, and thereby reject the null hypothesis, we need results that would be very unlikely if the null was true.

Now recall that values of z which fall in the tails of the standard normal distribution represent unlikely values. That is, the proportion of the area under the curve as or more extreme than z is very small as we get into the tails of the distribution. Our significance level corresponds to the area under the tail that is exactly equal to α: if we use our normal criterion of α = .05, then 5% of the area under the curve becomes what we call the rejection region (also called the critical region) of the distribution. This is illustrated in Figure 4.

image

Figure 4: The rejection region for a one-tailed test

The shaded rejection region takes up 5% of the area under the curve. Any result which falls in that region is sufficient evidence to reject the null hypothesis.

The rejection region is bounded by a specific z-value, as is any area under the curve. In hypothesis testing, the value corresponding to a specific rejection region is called the critical value, z crit (“z-crit”) or z* (hence the other name “critical region”). Finding the critical value works exactly the same as finding the z-score corresponding to any area under the curve like we did in Unit 1. If we go to the normal table, we will find that the z-score corresponding to 5% of the area under the curve is equal to 1.645 (z = 1.64 corresponds to 0.0405 and z = 1.65 corresponds to 0.0495, so .05 is exactly in between them) if we go to the right and -1.645 if we go to the left. The direction must be determined by your alternative hypothesis, and drawing then shading the distribution is helpful for keeping directionality straight.

Suppose, however, that we want to do a non-directional test. We need to put the critical region in both tails, but we don’t want to increase the overall size of the rejection region (for reasons we will see later). To do this, we simply split it in half so that an equal proportion of the area under the curve falls in each tail’s rejection region. For α = .05, this means 2.5% of the area is in each tail, which, based on the z-table, corresponds to critical values of z* = ±1.96. This is shown in Figure 5.

image

Figure 5: Two-tailed rejection region

Thus, any z-score falling outside ±1.96 (greater than 1.96 in absolute value) falls in the rejection region. When we use z-scores in this way, the obtained value of z (sometimes called z-obtained) is something known as a test statistic, which is simply an inferential statistic used to test a null hypothesis.

Calculate the test statistic: Z

Now that we understand setting up hypotheses and determining the outcome, let’s examine hypothesis testing with z! The next step is to carry out the study and get the actual results for our sample. Central to the hypothesis test is the comparison of the population and sample means. To make our calculation and determine where the sample falls in the hypothesized distribution, we calculate z for the sample data.

Make a decision

To decide whether to reject the null hypothesis, we compare our sample’s Z score to the Z score that marks our critical boundary. If our sample Z score falls inside the rejection region of the comparison distribution (is greater than the z-score critical boundary) we reject the null hypothesis.

The formula for our z- statistic has not changed:

z = (X̄ − μ) / (σ / √n)

To formally test our hypothesis, we compare our obtained z-statistic to our critical z-value. If |z obt | > z crit , that means it falls in the rejection region (to see why, draw a line for z = 2.5 on Figure 1 or Figure 2) and so we reject H 0 . If |z obt | < z crit , we fail to reject. Remember that as z gets larger, the corresponding area under the curve beyond z gets smaller. Thus, the proportion, or p-value, will be smaller than the area for α, and if the area is smaller, the probability gets smaller. Specifically, the probability of obtaining that result, or a more extreme result, under the condition that the null hypothesis is true gets smaller.
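This decision rule can be sketched in a few lines of Python using the standard library's `statistics.NormalDist` (the helper name `z_decision` is ours):

```python
from statistics import NormalDist

def z_decision(z_obt: float, alpha: float = 0.05, tails: int = 2):
    """Compare an obtained z to its critical value and report the p-value."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / tails)
    # Area under the curve beyond the obtained z, doubled for a two-tailed test.
    p = tails * (1 - nd.cdf(abs(z_obt)))
    return abs(z_obt) > z_crit, p

reject, p = z_decision(2.5, alpha=0.05, tails=2)
# z = 2.5 falls beyond the +/-1.96 boundary, so we reject; p is about 0.012,
# smaller than alpha = 0.05, just as the text describes.
```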

Conversely, if we fail to reject, we know that the proportion will be larger than α because the z-statistic will not be as far into the tail. This is illustrated for a one-tailed test in Figure 6.

image

Figure 6. Relation between α, z obt , and p

When the null hypothesis is rejected, the effect is said to be statistically significant . Do not confuse statistical significance with practical significance. A small effect can be highly significant if the sample size is large enough.

Why does the word “significant” in the phrase “statistically significant” mean something so different from other uses of the word? Interestingly, this is because the meaning of “significant” in everyday language has changed. It turns out that when the procedures for hypothesis testing were developed, something was “significant” if it signified something. Thus, finding that an effect is statistically significant signifies that the effect is real and not due to chance. Over the years, the meaning of “significant” changed, leading to the potential misinterpretation.

Review: Steps of the Hypothesis Testing Process

The process of testing hypotheses follows a simple four-step procedure. This process will be what we use for the remainder of the textbook and course, and though the hypotheses and statistics we use will change, this process will not.

Step 1: State the Hypotheses

Your hypotheses are the first thing you need to lay out. Otherwise, there is nothing to test! You have to state the null hypothesis (which is what we test) and the alternative hypothesis (which is what we expect). These should be stated mathematically as they were presented above AND in words, explaining in normal English what each one means in terms of the research question.

Step 2: Find the Critical Values

Next, we formally lay out the criteria we will use to test our hypotheses. There are two pieces of information that inform our critical values: α, which determines how much of the area under the curve composes our rejection region, and the directionality of the test, which determines where the region will be.

Step 3: Compute the Test Statistic

Once we have our hypotheses and the standards we use to test them, we can collect data and calculate our test statistic, in this case z . This step is where the vast majority of differences in future chapters will arise: different tests used for different data are calculated in different ways, but the way we use and interpret them remains the same.

Step 4: Make the Decision

Finally, once we have our obtained test statistic, we can compare it to our critical value and decide whether we should reject or fail to reject the null hypothesis. When we do this, we must interpret the decision in relation to our research question, stating what we concluded, what we based our conclusion on, and the specific statistics we obtained.
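The four steps above can be sketched as one small function. This is an illustrative outline only, assuming a known population σ; the function name `z_test` and the example numbers are ours, not the chapter's:

```python
from math import sqrt
from statistics import NormalDist

def z_test(xbar, mu, sigma, n, alpha=0.05, tails=2):
    """Steps 2-4: find the critical value, compute z, and make the decision.

    Step 1 (stating the hypotheses) happens before any numbers are run.
    """
    z_crit = NormalDist().inv_cdf(1 - alpha / tails)  # Step 2: critical value
    z_obt = (xbar - mu) / (sigma / sqrt(n))           # Step 3: test statistic
    reject = abs(z_obt) > z_crit                      # Step 4: decision
    return z_obt, z_crit, reject

# Hypothetical numbers: sample mean 10.2 tested against mu = 10, sigma = 1, n = 100
z_obt, z_crit, reject = z_test(10.2, 10.0, 1.0, 100)
```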

Example: Movie Popcorn

Let’s see how hypothesis testing works in action by working through an example. Say that a movie theater owner likes to keep a very close eye on how much popcorn goes into each bag sold, so he knows that the average bag has 8 cups of popcorn and that this varies a little bit, about half a cup. That is, the known population mean is μ = 8.00 and the known population standard deviation is σ = 0.50. The owner wants to make sure that the newest employee is filling bags correctly, so over the course of a week he randomly assesses 25 bags filled by the employee to test for a difference (n = 25). He doesn’t want bags overfilled or underfilled, so he looks for differences in both directions. This scenario has all of the information we need to begin our hypothesis testing procedure.

Our manager is looking for a difference in the mean cups of popcorn bags compared to the population mean of 8. We will need both a null and an alternative hypothesis written both mathematically and in words. We’ll always start with the null hypothesis:

H 0 : There is no difference in the cups of popcorn bags from this employee
H 0 : μ = 8.00

Notice that we phrase the hypothesis in terms of the population parameter μ, which in this case would be the true average cups of bags filled by the new employee.

Our assumption of no difference, the null hypothesis, is that this mean is exactly the same as the known population mean value we want it to match, 8.00. Now let’s do the alternative:

H A : There is a difference in the cups of popcorn bags from this employee
H A : μ ≠ 8.00

In this case, we don’t know if the bags will be too full or not full enough, so we do a two-tailed alternative hypothesis that there is a difference.

Our critical values are based on two things: the directionality of the test and the level of significance. We decided in step 1 that a two-tailed test is the appropriate directionality. We were given no information about the level of significance, so we assume that α = 0.05 is what we will use. As stated earlier in the chapter, the critical values for a two-tailed z-test at α = 0.05 are z* = ±1.96. These will be the criteria we use to test our hypothesis. We can now draw out our distribution so we can visualize the rejection region and make sure it makes sense.

image

Figure 7: Rejection region for z* = ±1.96

Step 3: Calculate the Test Statistic

Now we come to our formal calculations. Let’s say that the manager collects data and finds that the average cups of this employee’s popcorn bags is X̄ = 7.75 cups. We can now plug this value, along with the values presented in the original problem, into our equation for z:

z = (7.75 − 8.00) / (0.50 / √25) = −0.25 / 0.10 = −2.50

So our test statistic is z = −2.50, which we can draw onto our rejection region distribution:

image

Figure 8: Test statistic location
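The arithmetic for this test statistic can be checked in a couple of lines of Python (our sketch of the chapter's formula):

```python
from math import sqrt

# Popcorn example: X-bar = 7.75, mu = 8.00, sigma = 0.50, n = 25
z = (7.75 - 8.00) / (0.50 / sqrt(25))
print(z)  # -2.5, matching the hand calculation
```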

Looking at Figure 8, we can see that our obtained z-statistic falls in the rejection region. We can also directly compare it to our critical value: in terms of absolute value, |−2.50| > 1.96, so we reject the null hypothesis. We can now write our conclusion:

When we write our conclusion, we write out the words to communicate what it actually means, but we also include the sample average we calculated (the exact location doesn’t matter, just somewhere that flows naturally and makes sense) and the z-statistic and p-value. We don’t know the exact p-value, but we do know that because we rejected the null, it must be less than α.

Effect Size

When we reject the null hypothesis, we are stating that the difference we found was statistically significant, but we have mentioned several times that this tells us nothing about practical significance. To get an idea of the actual size of what we found, we can compute a new statistic called an effect size. Effect sizes give us an idea of how large, important, or meaningful a statistically significant effect is.

For mean differences like we calculated here, our effect size is Cohen’s d :

d = (X̄ − μ) / σ

Effect sizes are incredibly useful and provide important information and clarification that overcomes some of the weaknesses of hypothesis testing. Whenever you find a significant result, you should always calculate an effect size.
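For the popcorn example, Cohen's d is a one-line calculation (our sketch; the interpretation comment uses Cohen's conventional benchmarks of 0.2, 0.5, and 0.8):

```python
# Cohen's d: the mean difference expressed in standard deviation units.
d = (7.75 - 8.00) / 0.50
print(abs(d))  # 0.5, a medium effect by Cohen's conventional benchmarks
```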

Table 1. Interpretation of Cohen’s d (Cohen’s conventional benchmarks)

  • d ≈ 0.2: small effect
  • d ≈ 0.5: medium effect
  • d ≈ 0.8 or larger: large effect

Example: Office Temperature

Let’s do another example to solidify our understanding. Let’s say that the office building you work in is supposed to be kept at 74 degrees Fahrenheit but is allowed to vary by 1 degree in either direction. You suspect that, as a cost-saving measure, the temperature was secretly set higher. You set up a formal way to test your hypothesis.

You start by laying out the null hypothesis:

H 0 : There is no difference in the average building temperature
H 0 : μ = 74

Next you state the alternative hypothesis. You have reason to suspect a specific direction of change, so you make a one-tailed test:

H A : The average building temperature is higher than claimed
H A : μ > 74

image

Now that you have everything set up, you spend one week collecting temperature data:

You calculate the average of these scores to be 𝑋̅ = 76.6 degrees. You use this to calculate the test statistic, using μ = 74 (the supposed average temperature), σ = 1.00 (how much the temperature should vary), and n = 5 (how many data points you collected):

z = (76.60 − 74.00) / (1.00 / √5) = 2.60 / 0.45 = 5.78

This value falls so far into the tail that it cannot even be plotted on the distribution!

image

Figure 9: Obtained z-statistic

You compare your obtained z-statistic, z = 5.78, to the critical value, z* = 1.645, and find that z > z*. Therefore you reject the null hypothesis, concluding: Based on 5 observations, the average temperature (X̄ = 76.6 degrees) is statistically significantly higher than it is supposed to be, z = 5.78, p < .05.

d = (76.60 − 74.00) / 1.00 = 2.60

The effect size you calculate is definitely large, meaning someone has some explaining to do!
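Both statistics from this example can be checked numerically (our sketch). Note that carrying full precision through the denominator gives z ≈ 5.81; the slightly smaller value in the chapter comes from rounding the standard error to 0.45 at an intermediate step:

```python
from math import sqrt

# Office temperature example: X-bar = 76.60, mu = 74.00, sigma = 1.00, n = 5
z = (76.60 - 74.00) / (1.00 / sqrt(5))  # about 5.81 at full precision
d = (76.60 - 74.00) / 1.00              # 2.60, far beyond the 0.8 benchmark for large
```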

Example: Different Significance Level

Let’s take a look at an example phrased in generic terms, rather than in the context of a specific research question, to see the individual pieces one more time. This time, however, we will use a stricter significance level, α = 0.01, to test the hypothesis.

We will use 60 as an arbitrary null hypothesis value:
H 0 : The average score does not differ from the population
H 0 : μ = 60

We will assume a two-tailed test:
H A : The average score does differ
H A : μ ≠ 60

We have seen the critical values for z-tests at the α = 0.05 level of significance several times. To find the values for α = 0.01, we will go to the standard normal table and find the z-score cutting off 0.005 (0.01 divided by 2 for a two-tailed test) of the area in the tail, which is z crit * = ±2.575. Notice that this cutoff is much higher than it was for α = 0.05. This is because we need much less of the area in the tail, so we need to go very far out to find the cutoff. As a result, this will require a much larger effect or much larger sample size in order to reject the null hypothesis.

We can now calculate our test statistic. The average of 10 scores is M = 60.40 with μ = 60. We will use σ = 10 as our known population standard deviation. From this information, we calculate our z-statistic as:

z = (60.40 − 60.00) / (10 / √10) = 0.40 / 3.16 = 0.13

Our obtained z-statistic, z = 0.13, is very small. It is much less than our critical value of 2.575. Thus, this time, we fail to reject the null hypothesis. Our conclusion would look something like: Based on the sample of 10 scores, we fail to reject the null hypothesis; the average score (M = 60.40) does not differ significantly from the population, z = 0.13, p > 0.01.

Notice two things about the end of the conclusion. First, we wrote that p is greater than instead of p is less than, as we did in the previous two examples. This is because we failed to reject the null hypothesis. We don’t know exactly what the p-value is, but we know it must be larger than the α level we used to test our hypothesis. Second, we used 0.01 instead of the usual 0.05, because this time we tested at a different level. The number you compare to the p-value should always be the significance level you test at. Because we did not detect a statistically significant effect, we do not need to calculate an effect size. Note: some statisticians suggest always calculating an effect size to check for the possibility of a Type II error. Although the result was not significant, calculating d = (60.4 − 60)/10 = 0.04 suggests essentially no effect (and thus little concern about a Type II error).
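The whole fail-to-reject decision for this example can be verified numerically (our sketch using the standard library; 2.575 is the chapter's rounded critical value):

```python
from math import sqrt
from statistics import NormalDist

nd = NormalDist()
z = (60.40 - 60.00) / (10 / sqrt(10))   # about 0.13
z_crit = nd.inv_cdf(1 - 0.01 / 2)       # about 2.58 for a two-tailed test at alpha = 0.01
p = 2 * (1 - nd.cdf(abs(z)))            # about 0.90, far larger than alpha
# |z| < z_crit and p > 0.01, so we fail to reject the null hypothesis.
```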

Review Considerations in Hypothesis Testing

Errors in hypothesis testing.

Keep in mind that rejecting the null hypothesis is not an all-or-nothing decision. The Type I error rate is affected by the α level: the lower the α level the lower the Type I error rate. It might seem that α is the probability of a Type I error. However, this is not correct. Instead, α is the probability of a Type I error given that the null hypothesis is true. If the null hypothesis is false, then it is impossible to make a Type I error. The second type of error that can be made in significance testing is failing to reject a false null hypothesis. This kind of error is called a Type II error.

Statistical Power

The statistical power of a research design is the probability of rejecting the null hypothesis given the sample size and expected relationship strength. Statistical power is the complement of the probability of committing a Type II error. Clearly, researchers should be interested in the power of their research designs if they want to avoid making Type II errors. In particular, they should make sure their research design has adequate power before collecting data. A common guideline is that a power of .80 is adequate. This means that there is an 80% chance of rejecting the null hypothesis for the expected relationship strength.

Given that statistical power depends primarily on relationship strength and sample size, there are essentially two steps you can take to increase statistical power: increase the strength of the relationship or increase the sample size. Increasing the strength of the relationship can sometimes be accomplished by using a stronger manipulation or by more carefully controlling extraneous variables to reduce the amount of noise in the data (e.g., by using a within-subjects design rather than a between-subjects design). The usual strategy, however, is to increase the sample size. For any expected relationship strength, there will always be some sample large enough to achieve adequate power.
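For a one-tailed z test with known σ, this relationship between effect size, sample size, and power can be sketched with a standard formula: under the alternative, the test statistic is centred at d·√n, so power is the area beyond the critical value of that shifted distribution. A minimal Python sketch (the function name is ours, and this applies only to the simple one-sample z test described in this chapter):

```python
from math import sqrt
from statistics import NormalDist

def power_one_tailed_z(d: float, n: int, alpha: float = 0.05) -> float:
    """Power of a one-tailed one-sample z test for effect size d and sample size n."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha)
    # Under the alternative hypothesis, the test statistic is centred at d * sqrt(n).
    return 1 - nd.cdf(z_crit - d * sqrt(n))

print(round(power_one_tailed_z(0.5, 25), 2))  # about 0.80, the common adequacy guideline
```

Note how the two levers described above appear directly in the formula: increasing either d (relationship strength) or n (sample size) pushes d·√n up and power toward 1.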

Inferential statistics uses data from a sample of individuals to reach conclusions about the whole population. The degree to which our inferences are valid depends upon how we selected the sample (sampling technique) and the characteristics (parameters) of population data. Statistical analyses assume that sample(s) and population(s) meet certain conditions called statistical assumptions.

It is easy to check assumptions when using statistical software and it is important as a researcher to check for violations; if violations of statistical assumptions are not appropriately addressed then results may be interpreted incorrectly.

Learning Objectives

Having read the chapter, students should be able to:

  • Conduct a hypothesis test using a z-statistic, locate the critical region, and make a statistical decision.
  • Explain the purpose of measuring effect size and power, and be able to compute Cohen’s d.

Exercises – Ch. 10

1. List the main steps for hypothesis testing with the z-statistic. When and why do you calculate an effect size?
2. Determine whether you would reject or fail to reject the null hypothesis in each of the following situations:
  • z = 1.99, two-tailed test at α = 0.05
  • z = 1.99, two-tailed test at α = 0.01
  • z = 1.99, one-tailed test at α = 0.05
3. You are part of a trivia team and have tracked your team’s performance since you started playing, so you know that your scores are normally distributed with μ = 78 and σ = 12. Recently, a new person joined the team, and you think the scores have gotten better. Use hypothesis testing to see if the average score has improved based on the following weeks’ worth of score data: 82, 74, 62, 68, 79, 94, 90, 81, 80.
4. A study examines self-esteem and depression in teenagers. A sample of 25 teens with low self-esteem are given the Beck Depression Inventory. The average score for the group is 20.9. For the general population, the average score is 18.3 with σ = 12. Use a two-tailed test with α = 0.05 to examine whether teenagers with low self-esteem show significant differences in depression.
5. You get hired as a server at a local restaurant, and the manager tells you that servers’ tips are $42 on average but vary about $12 (μ = 42, σ = 12). You decide to track your tips to see if you make a different amount, but because this is your first job as a server, you don’t know if you will make more or less in tips. After working 16 shifts, you find that your average nightly amount is $44.50 from tips. Test for a difference between this value and the population mean at the α = 0.05 level of significance.

Answers to Odd-Numbered Exercises – Ch. 10

1. List hypotheses. Determine the critical region. Calculate z. Compare z to the critical region. Draw a conclusion. We calculate an effect size when we find a statistically significant result to see if our result is practically meaningful or important.

5. Step 1: H 0 : μ = 42 “My average tips do not differ from other servers’”, H A : μ ≠ 42 “My average tips do differ from others’”

Introduction to Statistics for Psychology Copyright © 2021 by Alisa Beyer is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Data analysis: hypothesis testing


5 Mean and z-score ranges

In the previous section on hypothesis testing using the normal distribution, the z-score was frequently mentioned. This is because the z-score is the test statistic used to determine whether the null hypothesis should be rejected or not rejected.

One way to gain an understanding of the calculated z-score and its alpha value is through the use of a z-score table. Using Excel functions, you can create a table that indicates z-scores and the corresponding area under the normal distribution curve. As you continue through the course, you will see how the z-score table can be used in hypothesis testing. For now, however, you will focus your attention on the use of Excel to create a table of z-scores, using the following steps.

Step 1 : In an Excel spreadsheet, you can create a z-score table yourself that shows the cumulative probability associated with any z-score. To give an illustration, the screenshot below (Figure 10) shows a table with the values 0.0, 0.1, 0.2, 0.3, 0.4, 0.5 in rows and the values 0.00, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09 in columns. These are the first and second decimal digits of the z-score whose cumulative probability will appear in the table. For example, the cell in the first row (0.0) and second column (0.01) of the as-yet empty table refers to the z-score z = 0.01 (i.e., 0.0 + 0.01 = 0.01). The table is really just a long list of probabilities for z-scores from z = 0.00 to z = 0.59, formatted as a table.

A z-score table with no entries


Step 2 : To use the ‘NORM.S.DIST’ Excel formula, begin by typing ‘=NORM.S.DIST(’ into the desired cell. The Excel software will prompt you to complete the formula’s value entry by entering the appropriate values for ‘z’ and ‘cumulative’.

A z-score table showing the entry of Excel formula ‘NORM.S.DIST(z, cumulative)’


Step 3 : After initiating the NORM.S.DIST formula in the designated cell, you can assign a value to ‘z’ by adding the value in the row cell to the value in the column cell.

A z-score table showing the entry of Excel formula ‘NORM.S.DIST($B4+C$3’


Step 4 : To indicate the cumulative distribution function, set the ‘cumulative’ argument to ‘TRUE’.

A z-score table showing the entry of Excel formula ‘NORM.S.DIST($B4+C$3’ and selecting ‘TRUE - cumulative distribution function’


A z-score table showing the completed Excel formula ‘NORM.S.DIST($B4+C$3, TRUE)’

Step 5 : After completing the formula with the appropriate values, press ‘Enter’ to calculate the result for the selected cell (Figure 15). To apply the Excel function to all cells within the table, click and drag the green box to cover the desired area (Figure 16).

A z-score table displaying the result 0.5000


A z-score table displaying all the results

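The same table the Excel steps build can be reproduced programmatically. Below is our Python translation, not part of the course itself; Excel's NORM.S.DIST(z, TRUE) corresponds to `NormalDist().cdf(z)` in the standard library:

```python
from statistics import NormalDist

# Rebuild the course's table: cumulative probability for z = 0.00 ... 0.59.
nd = NormalDist()
table = {
    round(row + col, 2): round(nd.cdf(row + col), 4)
    for row in [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
    for col in [0.00, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09]
}
print(table[0.0])  # 0.5, matching the 0.5000 shown in the screenshot
print(table[0.5])  # 0.6915
```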

In the following activity, you can practice using these Excel formulas to create a table of values of the normal distribution that correspond to a range of z-scores. These will be useful when testing hypotheses later.

Activity 5 Z-Score table

Using steps 1 to 5 described above, create a z-score table in Excel for z-scores ranging from -3 to 3.

Once you have completed these steps, reveal the discussion and compare your answers.

The completed table shows the cumulative normal probabilities for z-scores ranging from -3 to 3. This is what you should have produced as a result of this activity.
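If you would like to check your spreadsheet against code, the same table can be generated with Python's standard library, where `statistics.NormalDist.cdf` plays the role of NORM.S.DIST with ‘cumulative’ set to TRUE. This is just a sketch for cross-checking, not part of the activity itself:

```python
from statistics import NormalDist

# Cumulative standard normal probabilities, the stdlib analogue of
# Excel's NORM.S.DIST(z, TRUE)
std_normal = NormalDist(mu=0, sigma=1)

# Rows step by 0.1 and columns add the second decimal (0.00-0.09),
# mirroring the $B4 + C$3 construction in the spreadsheet
rows = [round(-3 + 0.1 * i, 1) for i in range(61)]   # -3.0, -2.9, ..., 3.0
cols = [round(0.01 * j, 2) for j in range(10)]       # 0.00, 0.01, ..., 0.09

# For negative rows the second decimal moves further left
# (e.g. row -1.3 with column .05 gives z = -1.35)
table = {r: [std_normal.cdf(r + c if r >= 0 else r - c) for c in cols]
         for r in rows}

print(f"{table[0.0][0]:.4f}")   # z = 0.00 -> 0.5000, as in Figure 15
```

The entry for row -1.3, column .05 should agree with the printed table value 0.08851 used later in this chapter.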


z-scoretable.com

Critical Z-Values: Gatekeepers in Hypothesis Testing

The world of statistics is full of tools, techniques, and terminologies that help us navigate the vast seas of data and draw meaningful conclusions. Among the plethora of statistical terms, ‘critical Z-values’ stands out, especially when venturing into the domain of hypothesis testing. To comprehend the importance and application of critical Z-values, let’s dive deeper into its essence and workings.

Z-Values: A Brief Refresher

At its foundation, a Z-value, or Z-score, indicates how many standard deviations a data point is away from the mean in a given dataset. This score can be both positive and negative, denoting values above or below the mean, respectively.


What Are Critical Z-Values?

Critical Z-values, often termed critical values, are threshold values set on a standard normal distribution curve. These values, usually one on the left (negative) and one on the right (positive), effectively create a boundary or region. When testing hypotheses, if your test statistic falls within this region, you would reject the null hypothesis in favor of the alternative hypothesis.

The selection of these critical values is directly linked to the significance level (often denoted as α) that the researcher or analyst has chosen. Commonly used significance levels include 0.05, 0.01, and 0.10.


The Mechanics of Critical Z-Values

To understand this concept better, let’s take the commonly used significance level of 0.05. If you’re conducting a two-tailed test (which means you’re considering extreme values on both ends of the distribution), you’d split this α into two, placing 0.025 in each tail. Using a Z-table or statistical software, you’d then identify the Z-values that correspond to these tail areas. For a significance level of 0.05, the critical Z-values are typically -1.96 and +1.96.
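These cutoffs come from inverting the standard normal CDF. As a quick check, here is a sketch using Python's standard library rather than a printed Z-table:

```python
from statistics import NormalDist

z = NormalDist()          # standard normal: mean 0, standard deviation 1
alpha = 0.05

# Two-tailed test: split alpha across both tails (0.025 in each)
lower = z.inv_cdf(alpha / 2)        # left critical value,  about -1.96
upper = z.inv_cdf(1 - alpha / 2)    # right critical value, about +1.96

# One-tailed cutoff for the same alpha
one_tail_upper = z.inv_cdf(1 - alpha)   # about +1.645

print(f"{lower:.2f}, {upper:.2f}, {one_tail_upper:.3f}")
```

The printed values, -1.96, 1.96 and 1.645, are exactly the critical values quoted throughout this page.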

Critical Z-Values in Hypothesis Testing

Hypothesis testing is a methodological process where statisticians make an initial assumption (the null hypothesis) and test the validity of this assumption based on sample data.

The process typically follows these steps:

  • State the Hypotheses: Formulate the null (Ho) and alternative hypotheses (Ha).
  • Choose the Significance Level (α): Decide the threshold for rejecting the null hypothesis.
  • Select the Test and Find the Critical Value(s): For a Z-test, this would be the critical Z-value.
  • Compute the Test Statistic: Calculate the Z-score for your sample data.
  • Make a Decision: If your test statistic falls within the critical region (beyond the critical Z-values), you’d reject the null hypothesis.

Examples and Applications

Example 1: Imagine a shoe manufacturer claims their shoes last an average of 365 days before showing significant wear. A competitor believes these shoes wear out faster and conducts a study with a sample of shoes. Using a significance level of 0.05 for a two-tailed test, the critical Z-values are -1.96 and +1.96. If the Z-score calculated from the sample data is -2.1 (indicating the shoes wore out faster than claimed), the null hypothesis would be rejected since -2.1 falls outside the critical values.

Example 2: A beverage company claims its juice box contains 250 ml of juice. A consumer group, suspecting the company overstates the quantity, tests a sample. Using a one-tailed test at α = 0.05 (because they only care if the juice box contains less than claimed), the critical Z-value for this test would be -1.645. If their sample calculation results in a Z-score of -1.8, the null hypothesis would be rejected, suggesting the juice boxes contain less than the stated amount.
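The decision rule used in both examples can be sketched in code. The `reject_null` helper below is purely illustrative (not a standard API); it compares a computed z statistic against the critical value(s) for the chosen significance level:

```python
from statistics import NormalDist

def reject_null(z_stat, alpha=0.05, tails="two"):
    """Illustrative decision rule: compare a z statistic to its critical value(s)."""
    z = NormalDist()
    if tails == "two":
        crit = z.inv_cdf(1 - alpha / 2)      # +/- 1.96 for alpha = 0.05
        return abs(z_stat) > crit
    if tails == "left":
        return z_stat < z.inv_cdf(alpha)     # -1.645 for alpha = 0.05
    return z_stat > z.inv_cdf(1 - alpha)     # right tail: +1.645

# Example 1: shoe wear, two-tailed test, observed z = -2.1
print(reject_null(-2.1, tails="two"))    # True: |-2.1| > 1.96
# Example 2: juice boxes, left-tailed test, observed z = -1.8
print(reject_null(-1.8, tails="left"))   # True: -1.8 < -1.645
```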

Why Are Critical Z-Values Important?

Critical Z-values serve as gatekeepers. They provide the boundary beyond which the observed data is considered rare or unusual under the assumption that the null hypothesis is true. By setting these boundaries, statisticians have a clear framework to determine whether to reject the null hypothesis or fail to reject it.


Critical Z-values, while just numbers on the surface, are pivotal in hypothesis testing, guiding decisions and providing clarity. They act as the yardstick against which observed data is measured, helping determine the validity of initial assumptions. In a world drowning in data, tools like critical Z-values help sieve through the noise, enabling researchers, analysts, and professionals to draw significant, actionable insights. As the backbone of hypothesis testing, understanding and applying critical Z-values is indispensable for anyone seeking to make informed decisions based on data.



Hypothesis Testing for Means & Proportions

Hypothesis Testing: Upper-, Lower-, and Two-Tailed Tests

The procedure for hypothesis testing is based on the ideas described above. Specifically, we set up competing hypotheses, select a random sample from the population of interest and compute summary statistics. We then determine whether the sample data supports the null or alternative hypotheses. The procedure can be broken down into the following five steps.  

  • Step 1. Set up hypotheses and select the level of significance α.

H 0 : Null hypothesis (no change, no difference);  

H 1 : Research hypothesis (investigator's belief); α =0.05

  • Step 2. Select the appropriate test statistic.  

The test statistic is a single number that summarizes the sample information. An example of a test statistic is the Z statistic computed as follows:

Z = (X̄ – μ 0 ) / (s / √n)

where X̄ is the sample mean, μ 0 is the mean specified in the null hypothesis, s is the sample standard deviation and n is the sample size.

When the sample size is small, we will use t statistics (just as we did when constructing confidence intervals for small samples). As we present each scenario, alternative test statistics are provided along with conditions for their appropriate use.

  • Step 3.  Set up decision rule.  

The decision rule is a statement that tells under what circumstances to reject the null hypothesis. The decision rule is based on specific values of the test statistic (e.g., reject H 0 if Z > 1.645). The decision rule for a specific test depends on 3 factors: the research or alternative hypothesis, the test statistic and the level of significance. Each is discussed below.

  • The decision rule depends on whether an upper-tailed, lower-tailed, or two-tailed test is proposed. In an upper-tailed test the decision rule has investigators reject H 0 if the test statistic is larger than the critical value. In a lower-tailed test the decision rule has investigators reject H 0 if the test statistic is smaller than the critical value.  In a two-tailed test the decision rule has investigators reject H 0 if the test statistic is extreme, either larger than an upper critical value or smaller than a lower critical value.
  • The exact form of the test statistic is also important in determining the decision rule. If the test statistic follows the standard normal distribution (Z), then the decision rule will be based on the standard normal distribution. If the test statistic follows the t distribution, then the decision rule will be based on the t distribution. The appropriate critical value will be selected from the t distribution again depending on the specific alternative hypothesis and the level of significance.  
  • The third factor is the level of significance. The level of significance which is selected in Step 1 (e.g., α =0.05) dictates the critical value.   For example, in an upper tailed Z test, if α =0.05 then the critical value is Z=1.645.  

The following figures illustrate the rejection regions defined by the decision rule for upper-, lower- and two-tailed Z tests with α=0.05. Notice that the rejection regions are in the upper, lower and both tails of the curves, respectively. The decision rules are written below each figure.

Standard normal distribution with upper tail at 1.645 and alpha=0.05

Rejection Region for Upper-Tailed Z Test (H 1 : μ > μ 0 ) with α =0.05

The decision rule is: Reject H 0 if Z > 1.645.

Standard normal distribution with lower tail at -1.645 and alpha=0.05

Rejection Region for Lower-Tailed Z Test (H 1 : μ < μ 0 ) with α =0.05

The decision rule is: Reject H 0 if Z < -1.645.

Standard normal distribution with two tails

Rejection Region for Two-Tailed Z Test (H 1 : μ ≠ μ 0 ) with α =0.05

The decision rule is: Reject H 0 if Z < -1.960 or if Z > 1.960.

The complete table of critical values of Z for upper, lower and two-tailed tests can be found in the table of Z values to the right in "Other Resources."

Critical values of t for upper, lower and two-tailed tests can be found in the table of t values in "Other Resources."

  • Step 4. Compute the test statistic.  

Here we compute the test statistic by substituting the observed sample data into the test statistic identified in Step 2.

  • Step 5. Conclusion.  

The final conclusion is made by comparing the test statistic (which is a summary of the information observed in the sample) to the decision rule. The final conclusion will be either to reject the null hypothesis (because the sample data are very unlikely if the null hypothesis is true) or not to reject the null hypothesis (because the sample data are not very unlikely).  

If the null hypothesis is rejected, then an exact significance level is computed to describe the likelihood of observing the sample data assuming that the null hypothesis is true. The exact level of significance is called the p-value and it will be less than the chosen level of significance if we reject H 0 .

Statistical computing packages provide exact p-values as part of their standard output for hypothesis tests. In fact, when using a statistical computing package, the steps outlined above can be abbreviated. The hypotheses (Step 1) should always be set up in advance of any analysis and the significance criterion should also be determined (e.g., α =0.05). Statistical computing packages will produce the test statistic (usually reporting the test statistic as t) and a p-value. The investigator can then determine statistical significance using the following rule: if p < α, then reject H 0 .

  • Step 1. Set up hypotheses and determine level of significance

H 0 : μ = 191;  H 1 : μ > 191;  α =0.05

The research hypothesis is that weights have increased, and therefore an upper tailed test is used.

  • Step 2. Select the appropriate test statistic.

Because the sample size is large (n > 30) the appropriate test statistic is

Z = (X̄ – μ 0 ) / (s / √n)

  • Step 3. Set up decision rule.  

In this example, we are performing an upper tailed test (H 1 : μ> 191), with a Z test statistic and selected α =0.05.   Reject H 0 if Z > 1.645.

  • Step 4. Compute the test statistic.  

We now substitute the sample data into the formula for the test statistic identified in Step 2.  

  • Step 5. Conclusion.  

We reject H 0 because 2.38 > 1.645. We have statistically significant evidence, at α =0.05, to show that the mean weight of men in 2006 is more than 191 pounds.

Because we rejected the null hypothesis, we now approximate the p-value, which is the likelihood of observing the sample data if the null hypothesis is true. An alternative definition of the p-value is the smallest level of significance at which we can still reject H 0 . In this example, we observed Z=2.38 and for α=0.05 the critical value was 1.645. Because 2.38 exceeded 1.645 we rejected H 0 , and in our conclusion we reported a statistically significant increase in mean weight at a 5% level of significance. Using the table of critical values for upper-tailed tests, we can approximate the p-value. If we select α=0.025, the critical value is 1.960, and we still reject H 0 because 2.38 > 1.960. If we select α=0.010, the critical value is 2.326, and we still reject H 0 because 2.38 > 2.326. However, if we select α=0.005, the critical value is 2.576, and we cannot reject H 0 because 2.38 < 2.576. Therefore, the smallest α at which we still reject H 0 is 0.010. This is the p-value. A statistical computing package would produce a more precise p-value, between 0.005 and 0.010. Here we are approximating the p-value and would report p < 0.010.
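The table-based bracketing of the p-value can be cross-checked by computing the exact upper-tail area for Z = 2.38. A short sketch with Python's standard library:

```python
from statistics import NormalDist

z_stat = 2.38
# Upper-tailed test: the p-value is the area to the right of the observed Z
p_value = 1 - NormalDist().cdf(z_stat)

print(f"p = {p_value:.4f}")   # p = 0.0087, inside the bracket (0.005, 0.010)
```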

In all tests of hypothesis, there are two types of errors that can be committed. The first is called a Type I error and refers to the situation where we incorrectly reject H 0 when in fact it is true. This is also called a false positive result (as we incorrectly conclude that the research hypothesis is true when in fact it is not). When we run a test of hypothesis and decide to reject H 0 (e.g., because the test statistic exceeds the critical value in an upper tailed test) then either we make a correct decision because the research hypothesis is true or we commit a Type I error. The different conclusions are summarized in the table below. Note that we will never know whether the null hypothesis is really true or false (i.e., we will never know which row of the following table reflects reality).

Table - Conclusions in Test of Hypothesis

                     Do Not Reject H 0      Reject H 0
H 0 is True          Correct decision       Type I error
H 0 is False         Type II error          Correct decision

In the first step of the hypothesis test, we select a level of significance, α, and α= P(Type I error). Because we purposely select a small value for α, we control the probability of committing a Type I error. For example, if we select α=0.05, and our test tells us to reject H 0 , then there is a 5% probability that we commit a Type I error. Most investigators are very comfortable with this and are confident when rejecting H 0 that the research hypothesis is true (as it is the more likely scenario when we reject H 0 ).

When we run a test of hypothesis and decide not to reject H 0 (e.g., because the test statistic is below the critical value in an upper tailed test) then either we make a correct decision because the null hypothesis is true or we commit a Type II error. Beta (β) represents the probability of a Type II error and is defined as follows: β=P(Type II error) = P(Do not Reject H 0 | H 0 is false). Unfortunately, we cannot choose β to be small (e.g., 0.05) to control the probability of committing a Type II error because β depends on several factors including the sample size, α, and the research hypothesis. When we do not reject H 0 , it may be very likely that we are committing a Type II error (i.e., failing to reject H 0 when in fact it is false). Therefore, when tests are run and the null hypothesis is not rejected we often make a weak concluding statement allowing for the possibility that we might be committing a Type II error. If we do not reject H 0 , we conclude that we do not have significant evidence to show that H 1 is true. We do not conclude that H 0 is true.
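Although β cannot simply be fixed in advance, it can be computed once a specific alternative is assumed. The sketch below reuses the null mean from the earlier weight example but otherwise assumes hypothetical values (σ = 25, n = 100, a true mean of 195) purely to illustrate the dependence of β on these factors:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical setup (sigma, n and mu_true are assumed for illustration)
mu_0, sigma, n = 191, 25, 100     # null mean, population SD, sample size
mu_true = 195                     # an assumed true mean under H1
alpha = 0.05

se = sigma / sqrt(n)
z = NormalDist()

# Upper-tailed test rejects when the sample mean exceeds this cutoff
xbar_crit = mu_0 + z.inv_cdf(1 - alpha) * se

# beta = P(do not reject H0 | H0 false) = P(sample mean <= cutoff | mu = mu_true)
beta = z.cdf((xbar_crit - mu_true) / se)
power = 1 - beta

print(f"beta = {beta:.3f}, power = {power:.3f}")   # beta is about 0.518 here
```

Increasing n shrinks the standard error and lowers β, which is why a small sample size is the most common reason for a Type II error.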


 The most common reason for a Type II error is a small sample size.


Content ©2017. All Rights Reserved. Date last modified: November 6, 2017. Wayne W. LaMorte, MD, PhD, MPH


Z-tests for Hypothesis testing: Formula & Examples

Different types of Z-test - One sample and two samples

Z-tests are statistical hypothesis testing techniques used to determine whether a null hypothesis comparing sample means or proportions with those of a population can be rejected at a given significance level, based on the z-statistic or z-score. As a data scientist, you must get a good understanding of z-tests and their applications for testing hypotheses about your statistical models. In this blog post, we will discuss an overview of the different types of z-tests and related concepts with the help of examples.


What are Z-tests & Z-statistics?

Z-tests can be defined as statistical hypothesis testing techniques used to evaluate claims made about population parameters such as the mean or a proportion. A Z-test uses sample data to test a hypothesis about these population parameters. There are different types of Z-tests, used either to estimate a population mean or proportion or to test hypotheses comparing samples’ means or proportions.

Different types of Z-tests 

The following types of Z-test are used to perform different kinds of hypothesis testing.  


  • One-sample Z-test for means
  • Two-sample Z-test for means
  • One-sample Z-test for proportions
  • Two-sample Z-test for proportions

Four elements are involved in using a Z-test for hypothesis testing in these different scenarios. They are as follows:

  • An independent variable, the “sample”, which is assumed to be normally distributed;
  • A dependent variable, the test statistic (Z), calculated from the sample data;
  • The type of Z-test appropriate to the hypothesis being tested;
  • A significance level, “alpha”, usually set at 0.05 but which can also take values such as 0.01 or 0.10.

When to use Z-test – Explained with examples

The following are different scenarios in which a Z-test can be used:

  • Compare a sample (a single group) with the population with respect to the mean. This is called the one-sample Z-test for means. For example, testing whether students of a particular school score marks in Mathematics that are statistically significantly different from those of other schools. This can also be thought of as a hypothesis test to check whether the sample belongs to the population or not.
  • Compare two groups with respect to the population mean. This is called the two-sample Z-test for means. For example, comparing class X students from different schools to determine whether students of one school score better than others in Mathematics.
  • Compare a sample proportion with a hypothesized population proportion. This is called the one-sample Z-test for proportions. For example, testing whether the unemployment rate of a given state differs from the well-established rate for the country.
  • Compare the proportions of two populations. This is called the two-sample Z-test for proportions. For example, testing whether the efficacy rates of a vaccination in two different populations differ in a statistically significant way.
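As an illustration of the last scenario, a two-sample Z-test for proportions can be sketched as follows; the counts below are invented purely for the example:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical vaccine-efficacy data (counts invented for illustration)
x1, n1 = 180, 200   # successes and sample size, population 1
x2, n2 = 150, 200   # successes and sample size, population 2

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)          # pooled proportion under H0: p1 = p2
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z_stat = (p1 - p2) / se

# Two-tailed p-value
p_value = 2 * (1 - NormalDist().cdf(abs(z_stat)))

print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```

With these made-up counts the z statistic is about 3.95, far beyond the two-tailed cutoff of 1.96, so the difference in proportions would be declared statistically significant.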

Z-test Interview Questions 

Here is a list of a few interview questions you may expect in your data scientist interviews:

  • What is Z-test?
  • What is Z-statistics or Z-score?
  • When to use Z-test vs other tests such as T-test or Chi-square test?
  • What is Z-distribution?
  • What is the difference between Z-distribution and T-distribution?
  • What is sampling distribution?
  • What are different types of Z-tests?
  • Explain different types of Z-tests with the help of real-world examples?
  • What’s the difference between a two-sample Z-test for means and a two-sample Z-test for proportions? Explain with one example each.
  • As data scientists, give some scenarios when you would like to use Z-test when building machine learning models?

Ajitesh Kumar

Statology

Statistics Made Easy

How to Calculate a P-Value from a Z-Score by Hand

In most cases, when you find a z-score in statistics you can simply use a Z Score to P-Value Calculator to find the corresponding p-value.

However, sometimes you may be forced to calculate a p-value from a z-score by hand. In this case, you need to use the values found in a z table .

The following examples show how to calculate a p-value from a z-score by hand using a z-table.

Example 1: Find P-Value for a Left-Tailed Test

Suppose we conduct a left-tailed hypothesis test and get a z-score of  -1.22 . What is the p-value that corresponds to this z-score?

To find the p-value, we can simply locate the value  -1.22 in the z table :


The p-value that corresponds to a z-score of -1.22 is  0.1112 .

Example 2: Find P-Value for a Right-Tailed Test

Suppose we conduct a right-tailed hypothesis test and get a z-score of 1.43 . What is the p-value that corresponds to this z-score?

To find the p-value, we can first locate the value 1.43  in the z table :


Since we’re conducting a right-tailed test, we can then subtract this value from 1.

So our final p-value is: 1 – 0.9236 = 0.0764 .

Example 3: Find P-Value for a Two-Tailed Test

Suppose we conduct a two-tailed hypothesis test and get a z-score of -0.84 . What is the p-value that corresponds to this z-score?

To find the p-value, we can first locate the value -0.84  in the z table :


Since we’re conducting a two-tailed test, we can then multiply this value by 2.

So our final p-value is: 0.2005 * 2 =  0.4010 .
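All three lookups can be reproduced without a table. A sketch using Python's standard library (the exact values differ from the four-digit table only in the last rounded digit):

```python
from statistics import NormalDist

cdf = NormalDist().cdf

# Example 1: left-tailed test, z = -1.22 -> area to the left
p_left = cdf(-1.22)
# Example 2: right-tailed test, z = 1.43 -> area to the right
p_right = 1 - cdf(1.43)
# Example 3: two-tailed test, z = -0.84 -> double the smaller tail area
p_two = 2 * cdf(-0.84)

print(f"{p_left:.4f} {p_right:.4f} {p_two:.4f}")  # 0.1112 0.0764 0.4009
```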

Additional Resources

The following tutorials explain how to calculate p-values from z-scores using various statistical software:

How to Find a P-Value from a Z-Score in Excel
How to Find a P-Value of a Z-Score in R
How to Find a P-Value from a Z-Score in Python

Hey there. My name is Zach Bobbitt. I have a Masters of Science degree in Applied Statistics and I’ve worked on machine learning algorithms for professional businesses in both healthcare and retail. I’m passionate about statistics, machine learning, and data visualization and I created Statology to be a resource for both students and teachers alike.  My goal with this site is to help you learn statistics through using simple terms, plenty of real-world examples, and helpful illustrations.

Z Table. Z Score Table. Normal Distribution Table. Standard Normal Table.

Negative Z score table

Graph: the area under the bell curve to the left of a negative z-score.

Use the negative Z score table below to find values on the left of the mean, as can be seen in the graph alongside. Corresponding values which are less than the mean are marked with a negative score in the z-table and represent the area under the bell curve to the left of z.

Table 1.1: Negative Z-table

Positive Z score table

Graph: the area under the bell curve to the left of a positive z-score.

Use the positive Z score table below to find values on the right of the mean, as can be seen in the graph alongside. Corresponding values which are greater than the mean are marked with a positive score in the z-table and represent the area under the bell curve to the left of z.

Table 1.2: Positive Z-table

Note: Feel free to use and share the above images as long as you provide attribution to our site by crediting a link to  https://www.ztable.net

How to use the Z Score Formula

To use the Z-Tables, however, you will need to know a little something called the Z-Score. It is the Z-Score that gets mapped across the Z-Table, and it is usually either pre-provided or has to be derived using the Z Score formula. But before we take a look at the formula, let us understand what the Z Score is.

What is a Z Score?

A Z Score, also called the Standard Score, is a measurement of how many standard deviations below or above the population mean a raw score is. In simple terms, the Z Score gives you an idea of a value’s relationship to the mean and how far from the mean a data point is.

A Z Score is measured in terms of standard deviations from the mean: if Z Score = 1, then that value is one standard deviation from the mean, whereas if Z Score = 0, the value is identical to the mean.

A Z Score can be either positive or negative depending on whether the score lies above the mean (in which case it is positive) or below the mean (in which case it is negative).

The Z Score helps us compare results to the normal population or mean.

The Z Score Formula

The Z Score Formula or the Standard Score Formula is given as

Z score = ( x – µ ) / σ

When we do not have a pre-provided Z Score supplied to us, we will use the above formula to calculate the Z Score using the other data available like the observed value, mean of the sample and the standard deviation. Similarly, if we have the standard score provided and are missing any one of the other three values, we can substitute them in the above formula to get the missing value.

Understanding how to use the Z Score Formula with an example

Let us understand how to calculate the Z-score, the Z-Score Formula and use the Z-table with a simple real life example.

Q: 300 college students’ exam scores are tallied at the end of the semester. Eric scored 800 marks (X) in total out of 1000. The average score for the batch was 700 (µ) and the standard deviation was 180 (σ). Let’s find out how well Eric scored compared to his batch mates.

Using the above data we need to first standardize his score and use the respective z-table before we determine how well he performed compared to his batch mates.

To find out the Z score we use the formula

Z Score = (Observed Value – Mean of the Sample)/standard deviation

Z score = ( x – µ ) / σ

Z score = (800-700) / 180

Z score = 0.56

Once we have the Z Score which was derived through the Z Score formula, we can now go to the next part which is understanding how to read the Z Table and map the value of the Z Score we’ve got, using it.

How to Read The Z Table

To map a Z score across a Z Table, it goes without saying that the first thing you need is the Z Score itself. In the above example, we derive that Eric’s Z-score is 0.56.

Once you have the Z Score, the next step is choosing between the two tables. That is choosing between using the negative Z Table and the positive Z Table depending on whether your Z score value is positive or negative.

What we are basically establishing with a positive or negative Z Score is whether your values lie on the left of the mean or right of the mean. To find the area on the left of the mean, you will have a negative Z Score and use a negative Z Table. Similarly, to find the area on the right of the mean, you will have a positive Z Score and use a positive Z Table.

Now that we have Eric’s Z score which we know is a positive 0.56 and we know which corresponding table to pick for it, we will make use of the positive Z-table (Table 1.2) to predict how good or bad Eric performed compared to his batch mates.

Now that we’ve picked the appropriate table to look up to, in the next step of the process we will learn how to map our Z score value in the respective table. Let us understand using the example we’ve chosen with Eric’s Z score of 0.56

Traverse vertically down the leftmost column (the Y-axis) to find the value of the first two digits of your Z Score (0.5, based on Eric’s Z score).


Once you have that, go alongside the X-axis on the topmost row to find the value of the digits at the second decimal position (.06 based on Eric’s Z score)


Once you have mapped these two values, find the intersection of the row of the first two digits and the column of the second decimal value in the table. The intersection of the two is the answer we’re looking for.


In our example, we get the intersection at a value of 0.71226 (~0.7123).

To get this as a percentage, we multiply that number by 100; therefore 0.7123 x 100 = 71.23%. Hence we find that Eric did better than 71.23% of students.
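Eric's whole calculation can be reproduced in a few lines, using Python's standard library in place of the printed table. Note that the z-score is rounded to two decimals first, just as the table lookup requires:

```python
from statistics import NormalDist

x, mu, sigma = 800, 700, 180     # Eric's score, batch mean, batch SD

z_score = round((x - mu) / sigma, 2)      # round to two decimals for the lookup
percentile = NormalDist().cdf(z_score) * 100

print(z_score)                    # 0.56
print(f"{percentile:.2f}%")       # 71.23%, matching the table lookup
```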

Let us take one more example but this time for a negative z score and a negative z table.

Let us consider our Z score = -1.35

Based on what we had discussed before, since the z score is negative, we will use the negative z table (Table 1.1)

First, traverse vertically down the leftmost column to find the value of the first two digits, that is -1.3.


Once we have that, we will traverse along the X axis in the topmost row to map the second decimal (.05 in this case) and find the corresponding column for it.


The intersection of the row of the first two digits and the column of the second decimal value in the above Z table is the answer we’re looking for, which in the case of our example is 0.08851, or 8.85%.


(Note that this method of mapping the Z Score value is the same for both positive and negative Z Scores. That is because, for a standard normal distribution table, the two halves of the curve on either side of the mean are identical. So it only depends on whether the Z Score value is positive or negative, that is, whether we are looking up the area on the left of the mean or on the right of the mean, when it comes to choosing the respective table.)

Why are there two Z tables?

There are two Z tables to make things less complicated. Sure, they could be combined into one single larger Z-table, but that can be a bit overwhelming for a lot of beginners and it also increases the chance of human error during calculations. Using two Z tables makes life easier: based on whether you want the area for a positive value or a negative value, you can use the respective Z score table.

If you want to know the area to the left of a negative value, you will use the first table (1.1) shown above, which is the left-hand/negative Z-table. If you want to know the area to the left of a positive value, you will use the second table (1.2) above, which is the right-hand/positive Z-table.

What is Standard Deviation? (σ)

Standard deviation, denoted by the Greek letter sigma (σ), is the square root of the variance, where the variance is the average of the squared differences from the mean.
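That definition translates directly into code. A from-scratch sketch of the population variance and standard deviation (the function names here are ours, for illustration only):

```python
def variance(data):
    """Population variance: average of the squared differences from the mean."""
    mean = sum(data) / len(data)
    return sum((x - mean) ** 2 for x in data) / len(data)

def std_dev(data):
    """Population standard deviation: square root of the variance."""
    return variance(data) ** 0.5

# Mean is 5, squared differences sum to 32, so variance = 4 and sigma = 2
print(std_dev([2, 4, 4, 4, 5, 5, 7, 9]))  # -> 2.0
```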

Sample Questions For Practice 

1. What is P(Z ≥ 1.20)?

Answer: 0.11507

To find the answer using the Z-tables above, note that by symmetry P(Z ≥ 1.20) = P(Z ≤ -1.20). So we look up the row for the first two digits, -1.2, on the Y axis, then go along the X axis to find the column for the second decimal, 0.00. Hence we get the answer 0.11507.

2. What is P(Z ≤ 1.20)?

(Same as above, using the other table. Try solving this yourself for practice.)

Answer: 0.88493
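The two practice answers are complements of each other, since P(Z ≥ z) + P(Z ≤ z) = 1 for a continuous distribution. A quick numeric check; the `phi` helper is our own error-function sketch of the standard normal CDF, not a library call:

```python
import math

def phi(z):
    """Standard normal CDF: P(Z <= z)."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

p_le = phi(1.20)   # P(Z <= 1.20): answer to question 2
p_ge = 1 - p_le    # P(Z >= 1.20): answer to question 1, by the complement rule

print(round(p_le, 5), round(p_ge, 5))  # -> 0.88493 0.11507
```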

History of Standard Normal Distribution Table

The credit for discovering and first writing down the standard normal distribution goes to the French mathematician Abraham de Moivre (26 May 1667 – 27 November 1754), who is also well known for de Moivre's formula, which links complex numbers and trigonometry.

De Moivre came to the normal distribution through his scientific, mathematical approach to gambling. He was trying to come up with a mathematical expression for the probabilities of coin flips and various other questions arising in games of chance.

He discovered that although data sets can have a wide range of values, we can 'standardize' them using a bell-shaped distribution curve, which makes the data easier to analyze by setting the mean to zero and the standard deviation to one. The bell-shaped curve he discovered ended up being known as the normal curve.

This discovery was extremely useful and was put to use by other mathematicians in the years that followed, as it was realized that the normal distribution applies to a large number of mathematical and real-life phenomena. For example, the Belgian astronomer Adolphe Quetelet (22 February 1796 – 17 February 1874) discovered that even though people's heights, weights and strengths span a wide range, with heights ranging from roughly 3 to 8 feet and weights from a few pounds to a few hundred pounds, these measurements closely follow a standard normal distribution curve.

The normal curve was used not only to standardize data sets but also to analyze errors and error distribution patterns. For example, it was used to analyze errors in astronomical observation measurements. Galileo had discovered that such errors were symmetric in nature, and in the nineteenth century it was realized that the errors themselves followed a normal distribution.

The same distribution was also discovered in the late 18th century by the renowned French mathematician Laplace (Pierre-Simon, marquis de Laplace; 23 March 1749 – 5 March 1827). Laplace's central limit theorem states that the distribution of sample means is approximately normal, and that the larger the sample, the closer that distribution comes to the normal distribution.

In probability theory, a special case of the central limit theorem known as the de Moivre–Laplace theorem states that the normal distribution may be used as an approximation to the binomial distribution under certain conditions. This theorem appears in the second edition of de Moivre's book 'The Doctrine of Chances', published in 1738.
