Hypothesis Testing - Chi Squared Test

Lisa Sullivan, PhD

Professor of Biostatistics

Boston University School of Public Health

Introduction

This module will continue the discussion of hypothesis testing, where a specific statement or hypothesis is generated about a population parameter, and sample statistics are used to assess the likelihood that the hypothesis is true. The hypothesis is based on available information and the investigator's belief about the population parameters. The specific tests considered here are called chi-square tests and are appropriate when the outcome is discrete (dichotomous, ordinal or categorical). For example, in some clinical trials the outcome is a classification such as hypertensive, pre-hypertensive or normotensive. We could use the same classification in an observational study such as the Framingham Heart Study to compare men and women in terms of their blood pressure status - again using the classification of hypertensive, pre-hypertensive or normotensive status.  

The technique to analyze a discrete outcome uses what is called a chi-square test. Specifically, the test statistic follows a chi-square probability distribution. We will consider chi-square tests here with one, two and more than two independent comparison groups.

Learning Objectives

After completing this module, the student will be able to:

  • Perform chi-square tests by hand
  • Appropriately interpret results of chi-square tests
  • Identify the appropriate hypothesis testing procedure based on type of outcome variable and number of samples

Tests with One Sample, Discrete Outcome

Here we consider hypothesis testing with a discrete outcome variable in a single population. Discrete variables are variables that take on more than two distinct responses or categories and the responses can be ordered or unordered (i.e., the outcome can be ordinal or categorical). The procedure we describe here can be used for dichotomous (exactly 2 response options), ordinal or categorical discrete outcomes and the objective is to compare the distribution of responses, or the proportions of participants in each response category, to a known distribution. The known distribution is derived from another study or report and it is again important in setting up the hypotheses that the comparator distribution specified in the null hypothesis is a fair comparison. The comparator is sometimes called an external or a historical control.   

In one sample tests for a discrete outcome, we set up our hypotheses against an appropriate comparator. We select a sample and compute descriptive statistics on the sample data. Specifically, we compute the sample size (n) and the proportions of participants in each response category.

Test Statistic for Testing H0: p1 = p10, p2 = p20, ..., pk = pk0

χ² = Σ (O - E)²/E

We find the critical value in a table of probabilities for the chi-square distribution with degrees of freedom (df) = k-1. In the test statistic, O = observed frequency and E = expected frequency in each of the response categories. The observed frequencies are those observed in the sample and the expected frequencies are computed as described below. χ² (chi-square) is a probability distribution that ranges from 0 to ∞. The test statistic formula above is appropriate for large samples, defined as expected frequencies of at least 5 in each of the response categories.

When we conduct a χ² test, we compare the observed frequencies in each response category to the frequencies we would expect if the null hypothesis were true. These expected frequencies are determined by allocating the sample to the response categories according to the distribution specified in H0. This is done by multiplying the observed sample size (n) by the proportions specified in the null hypothesis (p10, p20, ..., pk0). To ensure that the sample size is appropriate for the use of the test statistic above, we need to ensure the following: min(np10, np20, ..., npk0) ≥ 5.

The test of hypothesis with a discrete outcome measured in a single sample, where the goal is to assess whether the distribution of responses follows a known distribution, is called the χ 2 goodness-of-fit test. As the name indicates, the idea is to assess whether the pattern or distribution of responses in the sample "fits" a specified population (external or historical) distribution. In the next example we illustrate the test. As we work through the example, we provide additional details related to the use of this new test statistic.  

A University conducted a survey of its recent graduates to collect demographic and health information for future planning purposes as well as to assess students' satisfaction with their undergraduate experiences. The survey revealed that a substantial proportion of students were not engaging in regular exercise, many felt their nutrition was poor and a substantial number were smoking. In response to a question on regular exercise, 60% of all graduates reported getting no regular exercise, 25% reported exercising sporadically and 15% reported exercising regularly as undergraduates. The next year the University launched a health promotion campaign on campus in an attempt to increase health behaviors among undergraduates. The program included modules on exercise, nutrition and smoking cessation. To evaluate the impact of the program, the University again surveyed graduates and asked the same questions. The survey was completed by 470 graduates and the following data were collected on the exercise question:

 

| | No Regular Exercise | Sporadic Exercise | Regular Exercise | Total |
| --- | --- | --- | --- | --- |
| Number of Students | 255 | 125 | 90 | 470 |

Based on the data, is there evidence of a shift in the distribution of responses to the exercise question following the implementation of the health promotion campaign on campus? Run the test at a 5% level of significance.

In this example, we have one sample and a discrete (ordinal) outcome variable (with three response options). We specifically want to compare the distribution of responses in the sample to the distribution reported the previous year (i.e., 60%, 25%, 15% reporting no, sporadic and regular exercise, respectively). We now run the test using the five-step approach.  

  • Step 1. Set up hypotheses and determine level of significance.

The null hypothesis again represents the "no change" or "no difference" situation. If the health promotion campaign has no impact then we expect the distribution of responses to the exercise question to be the same as that measured prior to the implementation of the program.

H0: p1 = 0.60, p2 = 0.25, p3 = 0.15, or equivalently H0: Distribution of responses is 0.60, 0.25, 0.15

H1: H0 is false.          α = 0.05

Notice that the research hypothesis is written in words rather than in symbols. The research hypothesis as stated captures any difference in the distribution of responses from that specified in the null hypothesis. We do not specify a specific alternative distribution; instead, we are testing whether the sample data "fit" the distribution in H0 or not. With the χ² goodness-of-fit test there is no upper or lower tailed version of the test.

  • Step 2. Select the appropriate test statistic.  

The test statistic is:

χ² = Σ (O - E)²/E  (df = k - 1)

We must first assess whether the sample size is adequate. Specifically, we need to check min(np10, np20, ..., npk0) ≥ 5. The sample size here is n = 470 and the proportions specified in the null hypothesis are 0.60, 0.25 and 0.15. Thus, min(470(0.60), 470(0.25), 470(0.15)) = min(282, 117.5, 70.5) = 70.5. The sample size is more than adequate so the formula can be used.

  • Step 3. Set up decision rule.  

The decision rule for the χ 2 test depends on the level of significance and the degrees of freedom, defined as degrees of freedom (df) = k-1 (where k is the number of response categories). If the null hypothesis is true, the observed and expected frequencies will be close in value and the χ 2 statistic will be close to zero. If the null hypothesis is false, then the χ 2 statistic will be large. Critical values can be found in a table of probabilities for the χ 2 distribution. Here we have df=k-1=3-1=2 and a 5% level of significance. The appropriate critical value is 5.99, and the decision rule is as follows: Reject H 0 if χ 2 > 5.99.
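Critical values such as 5.99 can be read from a χ² table or computed directly in statistical software; for instance, a one-line check in R (the language demonstrated in the video later in this module):

```r
# Upper 5% critical value of the chi-square distribution with df = 2
qchisq(p = 0.95, df = 2)  # 5.99
```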

  • Step 4. Compute the test statistic.  

We now compute the expected frequencies using the sample size and the proportions specified in the null hypothesis. We then substitute the sample data (observed frequencies) and the expected frequencies into the formula for the test statistic identified in Step 2. The computations can be organized as follows.

   

| | No Regular Exercise | Sporadic Exercise | Regular Exercise | Total |
| --- | --- | --- | --- | --- |
| Observed Frequencies (O) | 255 | 125 | 90 | 470 |
| Expected Frequencies (E) | 470(0.60) = 282 | 470(0.25) = 117.5 | 470(0.15) = 70.5 | 470 |

Notice that the expected frequencies are taken to one decimal place and that the sum of the observed frequencies is equal to the sum of the expected frequencies. The test statistic is computed as follows:

χ² = (255 - 282)²/282 + (125 - 117.5)²/117.5 + (90 - 70.5)²/70.5 = 2.59 + 0.48 + 5.39 = 8.46

  • Step 5. Conclusion.  

We reject H0 because 8.46 > 5.99. We have statistically significant evidence at α=0.05 to show that H0 is false, or that the distribution of responses is not 0.60, 0.25, 0.15. Because 8.46 falls between the df = 2 critical values for p = 0.025 (7.38) and p = 0.01 (9.21), the p-value satisfies 0.01 < p < 0.025.

In the χ² goodness-of-fit test, we conclude that either the distribution specified in H0 is false (when we reject H0) or that we do not have sufficient evidence to show that the distribution specified in H0 is false (when we fail to reject H0). Here, we rejected H0 and concluded that the distribution of responses to the exercise question following the implementation of the health promotion campaign was not the same as the distribution prior to the campaign. The test itself does not provide details of how the distribution has shifted. A comparison of the observed and expected frequencies will provide some insight into the shift (when the null hypothesis is rejected). Does it appear that the health promotion campaign was effective?

Consider the following: 

 

| | No Regular Exercise | Sporadic Exercise | Regular Exercise | Total |
| --- | --- | --- | --- | --- |
| Observed Frequencies (O) | 255 | 125 | 90 | 470 |
| Expected Frequencies (E) | 282 | 117.5 | 70.5 | 470 |

If the null hypothesis were true (i.e., no change from the prior year) we would have expected more students to fall in the "No Regular Exercise" category and fewer in the "Regular Exercise" category. In the sample, 255/470 = 54% reported no regular exercise and 90/470 = 19% reported regular exercise. Thus, there is a shift toward more regular exercise following the implementation of the health promotion campaign. There is evidence of a statistical difference; is this a meaningful difference? Is there room for improvement?
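For readers who want to verify the computations by software, here is a minimal sketch of this goodness-of-fit test in base R; the category order follows the table above:

```r
# Chi-square goodness-of-fit test for the exercise survey
observed <- c(255, 125, 90)      # no, sporadic, regular exercise
p.null   <- c(0.60, 0.25, 0.15)  # distribution reported the prior year
chisq.test(observed, p = p.null) # X-squared = 8.46, df = 2, p ~ 0.015
```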

The National Center for Health Statistics (NCHS) provided data on the distribution of weight (in categories) among Americans in 2002. The distribution was based on specific values of body mass index (BMI) computed as weight in kilograms over height in meters squared. Underweight was defined as BMI< 18.5, Normal weight as BMI between 18.5 and 24.9, overweight as BMI between 25 and 29.9 and obese as BMI of 30 or greater. Americans in 2002 were distributed as follows: 2% Underweight, 39% Normal Weight, 36% Overweight, and 23% Obese. Suppose we want to assess whether the distribution of BMI is different in the Framingham Offspring sample. Using data from the n=3,326 participants who attended the seventh examination of the Offspring in the Framingham Heart Study we created the BMI categories as defined and observed the following:

 

| | Underweight (BMI < 18.5) | Normal Weight (BMI 18.5-24.9) | Overweight (BMI 25.0-29.9) | Obese (BMI ≥ 30) | Total |
| --- | --- | --- | --- | --- | --- |
| # of Participants | 20 | 932 | 1374 | 1000 | 3326 |

  • Step 1.  Set up hypotheses and determine level of significance.

H0: p1 = 0.02, p2 = 0.39, p3 = 0.36, p4 = 0.23, or equivalently

H0: Distribution of responses is 0.02, 0.39, 0.36, 0.23

H1: H0 is false.        α = 0.05

The formula for the test statistic is:

χ² = Σ (O - E)²/E  (df = k - 1)

We must assess whether the sample size is adequate. Specifically, we need to check min(np10, np20, ..., npk0) ≥ 5. The sample size here is n = 3,326 and the proportions specified in the null hypothesis are 0.02, 0.39, 0.36 and 0.23. Thus, min(3326(0.02), 3326(0.39), 3326(0.36), 3326(0.23)) = min(66.5, 1297.1, 1197.4, 765.0) = 66.5. The sample size is more than adequate, so the formula can be used.

Here we have df=k-1=4-1=3 and a 5% level of significance. The appropriate critical value is 7.81 and the decision rule is as follows: Reject H 0 if χ 2 > 7.81.

We now compute the expected frequencies using the sample size and the proportions specified in the null hypothesis. We then substitute the sample data (observed frequencies) into the formula for the test statistic identified in Step 2. We organize the computations in the following table.

 

| | Underweight | Normal Weight | Overweight | Obese | Total |
| --- | --- | --- | --- | --- | --- |
| Observed Frequencies (O) | 20 | 932 | 1374 | 1000 | 3326 |
| Expected Frequencies (E) | 66.5 | 1297.1 | 1197.4 | 765.0 | 3326 |

The test statistic is computed as follows:

χ² = (20 - 66.5)²/66.5 + (932 - 1297.1)²/1297.1 + (1374 - 1197.4)²/1197.4 + (1000 - 765.0)²/765.0 = 32.52 + 102.77 + 26.05 + 72.19 = 233.53

We reject H 0 because 233.53 > 7.81. We have statistically significant evidence at α=0.05 to show that H 0 is false or that the distribution of BMI in Framingham is different from the national data reported in 2002, p < 0.005.  

Again, the χ² goodness-of-fit test allows us to assess whether the distribution of responses "fits" a specified distribution. Here we show that the distribution of BMI in the Framingham Offspring Study is different from the national distribution. To understand the nature of the difference we can compare observed and expected frequencies or observed and expected proportions (or percentages). The frequencies are large because of the large sample size; the observed percentages of patients in the Framingham sample are as follows: 0.6% underweight, 28% normal weight, 41% overweight and 30% obese. In the Framingham Offspring sample there are higher percentages of overweight and obese persons (41% and 30% in Framingham as compared to 36% and 23% in the national data), and lower proportions of underweight and normal weight persons (0.6% and 28% in Framingham as compared to 2% and 39% in the national data). Are these meaningful differences?
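The same check can be run in one line of base R (the category order matches the BMI table above):

```r
# Goodness-of-fit test comparing Framingham BMI counts to the 2002 NCHS distribution
chisq.test(c(20, 932, 1374, 1000), p = c(0.02, 0.39, 0.36, 0.23))  # X-squared = 233.53, df = 3
```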

In the module on hypothesis testing for means and proportions, we discussed hypothesis testing applications with a dichotomous outcome variable in a single population. We presented a test using a test statistic Z to test whether an observed (sample) proportion differed significantly from a historical or external comparator. The chi-square goodness-of-fit test can also be used with a dichotomous outcome and the results are mathematically equivalent.  

In the prior module, we considered the following example. Here we show the equivalence to the chi-square goodness-of-fit test.

The NCHS report indicated that in 2002, 75% of children aged 2 to 17 saw a dentist in the past year. An investigator wants to assess whether use of dental services is similar in children living in the city of Boston. A sample of 125 children aged 2 to 17 living in Boston are surveyed and 64 reported seeing a dentist over the past 12 months. Is there a significant difference in use of dental services between children living in Boston and the national data?

We presented the following approach to the test using a Z statistic. 

  • Step 1. Set up hypotheses and determine level of significance

H0: p = 0.75

H1: p ≠ 0.75                               α = 0.05

We must first check that the sample size is adequate. Specifically, we need to check min(np0, n(1 - p0)) = min(125(0.75), 125(1 - 0.75)) = min(93.75, 31.25) = 31.25. The sample size is more than adequate so the following formula can be used:

Z = (p̂ - p0) / √( p0(1 - p0)/n )

This is a two-tailed test, using a Z statistic and a 5% level of significance. Reject H 0 if Z < -1.960 or if Z > 1.960.

We now substitute the sample data into the formula for the test statistic identified in Step 2. The sample proportion is p̂ = 64/125 = 0.512, and the test statistic is:

Z = (0.512 - 0.75) / √( 0.75(0.25)/125 ) = -0.238/0.0387 = -6.15

We reject H0 because -6.15 < -1.960. We have statistically significant evidence at α=0.05 to show that there is a statistically significant difference in the use of dental services by children living in Boston as compared to the national data. (p < 0.0001).

We now conduct the same test using the chi-square goodness-of-fit test. First, we summarize our sample data as follows:

 

| | Saw a Dentist in Past 12 Months | Did Not See a Dentist in Past 12 Months | Total |
| --- | --- | --- | --- |
| # of Participants | 64 | 61 | 125 |

H0: p1 = 0.75, p2 = 0.25, or equivalently H0: Distribution of responses is 0.75, 0.25

We must assess whether the sample size is adequate. Specifically, we need to check min(np10, np20, ..., npk0) ≥ 5. The sample size here is n = 125 and the proportions specified in the null hypothesis are 0.75 and 0.25. Thus, min(125(0.75), 125(0.25)) = min(93.75, 31.25) = 31.25. The sample size is more than adequate so the formula can be used.

Here we have df = k - 1 = 2 - 1 = 1 and a 5% level of significance. The appropriate critical value is 3.84, and the decision rule is as follows: Reject H0 if χ² > 3.84. (Note that 1.96² = 3.84, where 1.96 was the critical value used in the Z test for proportions shown above.)

 

| | Saw a Dentist | Did Not See a Dentist | Total |
| --- | --- | --- | --- |
| Observed Frequencies (O) | 64 | 61 | 125 |
| Expected Frequencies (E) | 93.75 | 31.25 | 125 |

The test statistic is computed as follows:

χ² = (64 - 93.75)²/93.75 + (61 - 31.25)²/31.25 = 9.44 + 28.32 = 37.8

(Note that (-6.15)² = 37.8, where -6.15 was the value of the Z statistic in the test for proportions shown above.)

We reject H0 because 37.8 > 3.84. We have statistically significant evidence at α=0.05 to show that there is a statistically significant difference in the use of dental services by children living in Boston as compared to the national data. (p < 0.0001). This is the same conclusion we reached when we conducted the test using the Z test above. With a dichotomous outcome, Z² = χ²! In statistics, there are often several approaches that can be used to test hypotheses.
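A minimal base R check of this equivalence; the continuity correction is turned off so the output matches the hand computations above:

```r
# One-sample Z test for a proportion and the equivalent goodness-of-fit test
prop.test(x = 64, n = 125, p = 0.75, correct = FALSE)  # X-squared = 37.8 = Z^2
chisq.test(c(64, 61), p = c(0.75, 0.25))               # same X-squared and p-value
```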

Tests for Two or More Independent Samples, Discrete Outcome

Here we extend that application of the chi-square test to the case with two or more independent comparison groups. Specifically, the outcome of interest is discrete with two or more responses and the responses can be ordered or unordered (i.e., the outcome can be dichotomous, ordinal or categorical). We now consider the situation where there are two or more independent comparison groups and the goal of the analysis is to compare the distribution of responses to the discrete outcome variable among several independent comparison groups.  

The test is called the χ 2 test of independence and the null hypothesis is that there is no difference in the distribution of responses to the outcome across comparison groups. This is often stated as follows: The outcome variable and the grouping variable (e.g., the comparison treatments or comparison groups) are independent (hence the name of the test). Independence here implies homogeneity in the distribution of the outcome among comparison groups.    

The null hypothesis in the χ 2 test of independence is often stated in words as: H 0 : The distribution of the outcome is independent of the groups. The alternative or research hypothesis is that there is a difference in the distribution of responses to the outcome variable among the comparison groups (i.e., that the distribution of responses "depends" on the group). In order to test the hypothesis, we measure the discrete outcome variable in each participant in each comparison group. The data of interest are the observed frequencies (or number of participants in each response category in each group). The formula for the test statistic for the χ 2 test of independence is given below.

Test Statistic for Testing H 0 : Distribution of outcome is independent of groups

and we find the critical value in a table of probabilities for the chi-square distribution with df=(r-1)*(c-1).

Here O = observed frequency, E=expected frequency in each of the response categories in each group, r = the number of rows in the two-way table and c = the number of columns in the two-way table.   r and c correspond to the number of comparison groups and the number of response options in the outcome (see below for more details). The observed frequencies are the sample data and the expected frequencies are computed as described below. The test statistic is appropriate for large samples, defined as expected frequencies of at least 5 in each of the response categories in each group.  

The data for the χ 2 test of independence are organized in a two-way table. The outcome and grouping variable are shown in the rows and columns of the table. The sample table below illustrates the data layout. The table entries (blank below) are the numbers of participants in each group responding to each response category of the outcome variable.

Table - Possible outcomes are listed in the columns; the groups being compared are listed in the rows.

| | Response Option 1 | Response Option 2 | ... | Response Option c | Total |
| --- | --- | --- | --- | --- | --- |
| Group 1 | | | | | |
| Group 2 | | | | | |
| ... | | | | | |
| Group r | | | | | |
| Total | | | | | N |

In the table above, the grouping variable is shown in the rows of the table; r denotes the number of independent groups. The outcome variable is shown in the columns of the table; c denotes the number of response options in the outcome variable. Each combination of a row (group) and column (response) is called a cell of the table. The table has r*c cells and is sometimes called an r x c ("r by c") table. For example, if there are 4 groups and 5 categories in the outcome variable, the data are organized in a 4 X 5 table. The row and column totals are shown along the right-hand margin and the bottom of the table, respectively. The total sample size, N, can be computed by summing the row totals or the column totals. Similar to ANOVA, N does not refer to a population size here but rather to the total sample size in the analysis. The sample data can be organized into a table like the above. The numbers of participants within each group who select each response option are shown in the cells of the table and these are the observed frequencies used in the test statistic.

The test statistic for the χ 2 test of independence involves comparing observed (sample data) and expected frequencies in each cell of the table. The expected frequencies are computed assuming that the null hypothesis is true. The null hypothesis states that the two variables (the grouping variable and the outcome) are independent. The definition of independence is as follows:

 Two events, A and B, are independent if P(A|B) = P(A), or equivalently, if P(A and B) = P(A) P(B).

The second statement indicates that if two events, A and B, are independent then the probability of their intersection can be computed by multiplying the probability of each individual event. To conduct the χ 2 test of independence, we need to compute expected frequencies in each cell of the table. Expected frequencies are computed by assuming that the grouping variable and outcome are independent (i.e., under the null hypothesis). Thus, if the null hypothesis is true, using the definition of independence:

P(Group 1 and Response Option 1) = P(Group 1) P(Response Option 1).

 The above states that the probability that an individual is in Group 1 and their outcome is Response Option 1 is computed by multiplying the probability that person is in Group 1 by the probability that a person is in Response Option 1. To conduct the χ 2 test of independence, we need expected frequencies and not expected probabilities . To convert the above probability to a frequency, we multiply by N. Consider the following small example.

 

| | Response 1 | Response 2 | Response 3 | Total |
| --- | --- | --- | --- | --- |
| Group 1 | 10 | 8 | 7 | 25 |
| Group 2 | 22 | 15 | 13 | 50 |
| Group 3 | 30 | 28 | 17 | 75 |
| Total | 62 | 51 | 37 | 150 |

The data shown above are measured in a sample of size N=150. The frequencies in the cells of the table are the observed frequencies. If Group and Response are independent, then we can compute the probability that a person in the sample is in Group 1 and Response category 1 using:

P(Group 1 and Response 1) = P(Group 1) P(Response 1),

P(Group 1 and Response 1) = (25/150) (62/150) = 0.069.

Thus if Group and Response are independent we would expect 6.9% of the sample to be in the top left cell of the table (Group 1 and Response 1). The expected frequency is 150(0.069) = 10.4.   We could do the same for Group 2 and Response 1:

P(Group 2 and Response 1) = P(Group 2) P(Response 1),

P(Group 2 and Response 1) = (50/150) (62/150) = 0.138.

The expected frequency in Group 2 and Response 1 is 150(0.138) = 20.7.

Thus, the formula for determining the expected cell frequencies in the χ 2 test of independence is as follows:

Expected Cell Frequency = (Row Total * Column Total)/N.

The above computes the expected frequency in one step rather than computing the expected probability first and then converting to a frequency.  
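As a short illustration, the whole table of expected frequencies for the small example above can be computed at once in base R (the matrix layout simply mirrors that table):

```r
# Expected cell frequencies under independence: (row total * column total) / N
counts <- matrix(c(10,  8,  7,
                   22, 15, 13,
                   30, 28, 17), nrow = 3, byrow = TRUE)
outer(rowSums(counts), colSums(counts)) / sum(counts)
# top-left cell: 25 * 62 / 150 = 10.33 (the 10.4 above reflects rounding)
```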

In a prior example we evaluated data from a survey of university graduates which assessed, among other things, how frequently they exercised. The survey was completed by 470 graduates. In the prior example we used the χ 2 goodness-of-fit test to assess whether there was a shift in the distribution of responses to the exercise question following the implementation of a health promotion campaign on campus. We specifically considered one sample (all students) and compared the observed distribution to the distribution of responses the prior year (a historical control). Suppose we now wish to assess whether there is a relationship between exercise on campus and students' living arrangements. As part of the same survey, graduates were asked where they lived their senior year. The response options were dormitory, on-campus apartment, off-campus apartment, and at home (i.e., commuted to and from the university). The data are shown below.

 

| | No Regular Exercise | Sporadic Exercise | Regular Exercise | Total |
| --- | --- | --- | --- | --- |
| Dormitory | 32 | 30 | 28 | 90 |
| On-Campus Apartment | 74 | 64 | 42 | 180 |
| Off-Campus Apartment | 110 | 25 | 15 | 150 |
| At Home | 39 | 6 | 5 | 50 |
| Total | 255 | 125 | 90 | 470 |

Based on the data, is there a relationship between exercise and students' living arrangement? Do you think where a person lives affects their exercise status? Here we have four independent comparison groups (living arrangement) and a discrete (ordinal) outcome variable with three response options. We specifically want to test whether living arrangement and exercise are independent. We will run the test using the five-step approach.

  • Step 1. Set up hypotheses and determine level of significance.

H0: Living arrangement and exercise are independent

H1: H0 is false.                α = 0.05

The null and research hypotheses are written in words rather than in symbols. The research hypothesis is that the grouping variable (living arrangement) and the outcome variable (exercise) are dependent or related.   

  • Step 2.  Select the appropriate test statistic.  

The test statistic is:

χ² = Σ (O - E)²/E  (df = (r - 1)(c - 1))

The condition for appropriate use of the above test statistic is that each expected frequency is at least 5. In Step 4 we will compute the expected frequencies and we will ensure that the condition is met.

  • Step 3. Set up decision rule.

The decision rule depends on the level of significance and the degrees of freedom, defined as df = (r - 1)(c - 1), where r and c are the numbers of rows and columns in the two-way data table. The row variable is the living arrangement and there are 4 arrangements considered, thus r = 4. The column variable is exercise and 3 responses are considered, thus c = 3. For this test, df = (4 - 1)(3 - 1) = 3(2) = 6. Again, with χ² tests there are no upper, lower or two-tailed tests. If the null hypothesis is true, the observed and expected frequencies will be close in value and the χ² statistic will be close to zero. If the null hypothesis is false, then the χ² statistic will be large. The rejection region for the χ² test of independence is always in the upper (right-hand) tail of the distribution. For df = 6 and a 5% level of significance, the appropriate critical value is 12.59 and the decision rule is as follows: Reject H0 if χ² > 12.59.

  • Step 4. Compute the test statistic.

We now compute the expected frequencies using the formula,

Expected Frequency = (Row Total * Column Total)/N.

The computations can be organized in a two-way table. In each cell of the table, the observed frequency is shown first and the expected frequency is shown in parentheses.

 

| | No Regular Exercise | Sporadic Exercise | Regular Exercise | Total |
| --- | --- | --- | --- | --- |
| Dormitory | 32 (48.8) | 30 (23.9) | 28 (17.2) | 90 |
| On-Campus Apartment | 74 (97.7) | 64 (47.9) | 42 (34.5) | 180 |
| Off-Campus Apartment | 110 (81.4) | 25 (39.9) | 15 (28.7) | 150 |
| At Home | 39 (27.1) | 6 (13.3) | 5 (9.6) | 50 |
| Total | 255 | 125 | 90 | 470 |

Notice that the expected frequencies are taken to one decimal place and that the sums of the observed frequencies are equal to the sums of the expected frequencies in each row and column of the table.  

Recall in Step 2 a condition for the appropriate use of the test statistic was that each expected frequency is at least 5. This is true for this sample (the smallest expected frequency is 9.6) and therefore it is appropriate to use the test statistic. The test statistic is computed as follows:

χ² = (32 - 48.8)²/48.8 + (30 - 23.9)²/23.9 + (28 - 17.2)²/17.2 + (74 - 97.7)²/97.7 + (64 - 47.9)²/47.9 + (42 - 34.5)²/34.5 + (110 - 81.4)²/81.4 + (25 - 39.9)²/39.9 + (15 - 28.7)²/28.7 + (39 - 27.1)²/27.1 + (6 - 13.3)²/13.3 + (5 - 9.6)²/9.6 = 5.78 + 1.56 + 6.78 + 5.75 + 5.41 + 1.63 + 10.05 + 5.56 + 6.54 + 5.23 + 4.01 + 2.20 = 60.5

  • Step 5. Conclusion.

We reject H0 because 60.5 > 12.59. We have statistically significant evidence at α=0.05 to show that H0 is false or that living arrangement and exercise are not independent (i.e., they are dependent or related), p < 0.005.

Again, the χ 2 test of independence is used to test whether the distribution of the outcome variable is similar across the comparison groups. Here we rejected H 0 and concluded that the distribution of exercise is not independent of living arrangement, or that there is a relationship between living arrangement and exercise. The test provides an overall assessment of statistical significance. When the null hypothesis is rejected, it is important to review the sample data to understand the nature of the relationship. Consider again the sample data. 

Because there are different numbers of students in each living situation, it makes the comparisons of exercise patterns difficult on the basis of the frequencies alone. The following table displays the percentages of students in each exercise category by living arrangement. The percentages sum to 100% in each row of the table. For comparison purposes, percentages are also shown for the total sample along the bottom row of the table.

| | No Regular Exercise | Sporadic Exercise | Regular Exercise |
| --- | --- | --- | --- |
| Dormitory | 36% | 33% | 31% |
| On-Campus Apartment | 41% | 36% | 23% |
| Off-Campus Apartment | 73% | 17% | 10% |
| At Home | 78% | 12% | 10% |
| All Students | 54% | 27% | 19% |

From the above, it is clear that higher percentages of students living in dormitories and in on-campus apartments reported regular exercise (31% and 23%) as compared to students living in off-campus apartments and at home (10% each).  
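A minimal sketch of this entire analysis in base R, using the observed counts and the row percentages discussed above:

```r
# Chi-square test of independence: living arrangement vs. exercise
exercise <- matrix(c( 32, 30, 28,
                      74, 64, 42,
                     110, 25, 15,
                      39,  6,  5),
                   nrow = 4, byrow = TRUE,
                   dimnames = list(Living = c("Dormitory", "On-Campus Apartment",
                                              "Off-Campus Apartment", "At Home"),
                                   Exercise = c("None", "Sporadic", "Regular")))
chisq.test(exercise)                  # X-squared = 60.5, df = 6
round(100 * prop.table(exercise, 1))  # row percentages, as in the table above
```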

Test Yourself

 Pancreaticoduodenectomy (PD) is a procedure that is associated with considerable morbidity. A study was recently conducted on 553 patients who had a successful PD between January 2000 and December 2010 to determine whether their Surgical Apgar Score (SAS) is related to 30-day perioperative morbidity and mortality. The table below gives the number of patients experiencing no, minor, or major morbidity by SAS category.  

| Surgical Apgar Score | None | Minor Morbidity | Major Morbidity or Mortality |
| --- | --- | --- | --- |
| 0-4 | 21 | 20 | 16 |
| 5-6 | 135 | 71 | 35 |
| 7-10 | 158 | 62 | 35 |

Question: What would be an appropriate statistical test to examine whether there is an association between Surgical Apgar Score and patient outcome? Using 14.13 as the value of the test statistic for these data, carry out the appropriate test at a 5% level of significance. Show all parts of your test.

In the module on hypothesis testing for means and proportions, we discussed hypothesis testing applications with a dichotomous outcome variable and two independent comparison groups. We presented a test using a test statistic Z to test for equality of independent proportions. The chi-square test of independence can also be used with a dichotomous outcome and the results are mathematically equivalent.  

In the prior module, we considered the following example. Here we show the equivalence to the chi-square test of independence.

A randomized trial is designed to evaluate the effectiveness of a newly developed pain reliever designed to reduce pain in patients following joint replacement surgery. The trial compares the new pain reliever to the pain reliever currently in use (called the standard of care). A total of 100 patients undergoing joint replacement surgery agreed to participate in the trial. Patients were randomly assigned to receive either the new pain reliever or the standard pain reliever following surgery and were blind to the treatment assignment. Before receiving the assigned treatment, patients were asked to rate their pain on a scale of 0-10 with higher scores indicative of more pain. Each patient was then given the assigned treatment and after 30 minutes was again asked to rate their pain on the same scale. The primary outcome was a reduction in pain of 3 or more scale points (defined by clinicians as a clinically meaningful reduction). The following data were observed in the trial.

| Treatment Group | n | # with Reduction of 3+ Points | Proportion with Reduction of 3+ Points |
| --- | --- | --- | --- |
| New Pain Reliever | 50 | 23 | 0.46 |
| Standard Pain Reliever | 50 | 11 | 0.22 |

We tested whether there was a significant difference in the proportions of patients reporting a meaningful reduction (i.e., a reduction of 3 or more scale points) using a Z statistic, as follows. 

H0: p1 = p2

H1: p1 ≠ p2                             α = 0.05

Here the new or experimental pain reliever is group 1 and the standard pain reliever is group 2.

We must first check that the sample size is adequate. Specifically, we need to ensure that we have at least 5 successes and 5 failures in each comparison group, or that:

min(n1 p̂1, n1(1 - p̂1), n2 p̂2, n2(1 - p̂2)) ≥ 5

In this example, we have min(23, 27, 11, 39) = 11.

Therefore, the sample size is adequate, so the following formula can be used:

Z = (p̂1 - p̂2) / √( p̂(1 - p̂)(1/n1 + 1/n2) )

Reject H 0 if Z < -1.960 or if Z > 1.960.

We now substitute the sample data into the formula for the test statistic identified in Step 2. We first compute the overall proportion of successes:

p̂ = (23 + 11)/(50 + 50) = 34/100 = 0.34

We now substitute to compute the test statistic:

Z = (0.46 - 0.22) / √( 0.34(0.66)(1/50 + 1/50) ) = 0.24/0.095 = 2.53

  • Step 5.  Conclusion.

We reject H0 because 2.53 > 1.960. We have statistically significant evidence at α=0.05 to show that there is a difference in the proportions of patients on the new pain reliever reporting a meaningful reduction in pain as compared to the standard pain reliever.

We now conduct the same test using the chi-square test of independence.  

H0: Treatment and outcome (meaningful reduction in pain) are independent

H1: H0 is false.         α = 0.05

The formula for the test statistic is:

χ² = Σ (O - E)²/E  (df = (r - 1)(c - 1))

For this test, df = (2 - 1)(2 - 1) = 1. At a 5% level of significance, the appropriate critical value is 3.84 and the decision rule is as follows: Reject H0 if χ² > 3.84. (Note that 1.96² = 3.84, where 1.96 was the critical value used in the Z test for proportions shown above.)

We now compute the expected frequencies using:

Expected Cell Frequency = (Row Total * Column Total)/N

The computations can be organized in a two-way table. In each cell of the table, the observed frequency is shown first and the expected frequency is shown in parentheses.

| | Reduction of 3+ Points | No Reduction of 3+ Points | Total |
| --- | --- | --- | --- |
| New Pain Reliever | 23 (17.0) | 27 (33.0) | 50 |
| Standard Pain Reliever | 11 (17.0) | 39 (33.0) | 50 |
| Total | 34 | 66 | 100 |

A condition for the appropriate use of the test statistic was that each expected frequency is at least 5. This is true for this sample (the smallest expected frequency is 17.0) and therefore it is appropriate to use the test statistic.

The test statistic is computed as follows:

χ² = (23 - 17.0)²/17.0 + (27 - 33.0)²/33.0 + (11 - 17.0)²/17.0 + (39 - 33.0)²/33.0 = 2.12 + 1.09 + 2.12 + 1.09 = 6.4

(Note that (2.53)² = 6.4, where 2.53 was the value of the Z statistic in the test for proportions shown above.)

We reject H0 because 6.4 > 3.84. We have statistically significant evidence at α=0.05 to show that H0 is false, or that treatment and outcome are not independent. This is the same conclusion we reached when we conducted the test using the Z test above.
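A minimal base R check of this equivalence (the continuity correction that R applies to 2x2 tables by default is turned off so the output matches the hand computation):

```r
# Chi-square test of independence and two-sample proportions test agree
pain <- matrix(c(23, 27,
                 11, 39), nrow = 2, byrow = TRUE)
chisq.test(pain, correct = FALSE)                 # X-squared = 6.4 (= 2.53^2)
prop.test(c(23, 11), c(50, 50), correct = FALSE)  # same X-squared and p-value
```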

Chi-Squared Tests in R

The video below by Mike Marin demonstrates how to perform chi-squared tests in the R programming language.

Answer to Problem on Pancreaticoduodenectomy and Surgical Apgar Scores

We have 3 independent comparison groups (Surgical Apgar Score) and a categorical outcome variable (morbidity/mortality). We can run a Chi-Squared test of independence.

H0: Apgar scores and patient outcome are independent of one another.

HA: Apgar scores and patient outcome are not independent.

For df = (3 - 1)(3 - 1) = 4 and a 5% level of significance, the appropriate critical value is 9.49.

Chi-squared = 14.13

Since 14.13 is greater than 9.49, we reject H0.

There is an association between Apgar scores and patient outcome. The lowest Apgar score group (0 to 4) experienced the highest percentage of major morbidity or mortality (16 out of 57=28%) compared to the other Apgar score groups.
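A minimal base R sketch of this test, with labels taken from the table in the question:

```r
# Chi-square test of independence: Surgical Apgar Score vs. 30-day outcome
sas <- matrix(c( 21, 20, 16,
                135, 71, 35,
                158, 62, 35),
              nrow = 3, byrow = TRUE,
              dimnames = list(SAS = c("0-4", "5-6", "7-10"),
                              Outcome = c("None", "Minor", "Major/Mortality")))
chisq.test(sas)  # X-squared = 14.13, df = 4; compare to the critical value 9.49
```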



Chi-Square Test of Independence and an Example

By Jim Frost

The Chi-square test of independence determines whether there is a statistically significant relationship between categorical variables . It is a hypothesis test that answers the question—do the values of one categorical variable depend on the value of other categorical variables? This test is also known as the chi-square test of association.

[Image: Star Trek meme showing doomed red-shirts.]

In this post, I’ll show you how the Chi-square test of independence works. Then, I’ll show you how to perform the analysis and interpret the results by working through the example. I’ll use this test to determine whether wearing the dreaded red shirt in Star Trek is the kiss of death!

If you need a primer on the basics, read my hypothesis testing overview .

Overview of the Chi-Square Test of Independence

The Chi-square test of association evaluates relationships between categorical variables. Like any statistical hypothesis test , the Chi-square test has both a null hypothesis and an alternative hypothesis.

  • Null hypothesis: There are no relationships between the categorical variables. If you know the value of one variable, it does not help you predict the value of another variable.
  • Alternative hypothesis: There are relationships between the categorical variables. Knowing the value of one variable does help you predict the value of another variable.

The Chi-square test of association works by comparing the distribution that you observe to the distribution that you expect if there is no relationship between the categorical variables. In the Chi-square context, the word “expected” is equivalent to what you’d expect if the null hypothesis is true. If your observed distribution is sufficiently different than the expected distribution (no relationship), you can reject the null hypothesis and infer that the variables are related.

For a Chi-square test, a p-value that is less than or equal to your significance level indicates there is sufficient evidence to conclude that the observed distribution is not the same as the expected distribution. You can conclude that a relationship exists between the categorical variables.

When you have smaller sample sizes, you might need to use Fisher’s exact test instead of the chi-square version. To learn more, read my post, Fisher’s Exact Test: Using and Interpreting .

Star Trek Fatalities by Uniform Colors

We’ll perform a Chi-square test of independence to determine whether there is a statistically significant association between shirt color and deaths. We need to use this test because these variables are both categorical variables. Shirt color can be only blue, gold, or red. Fatalities can be only dead or alive.

The color of the uniform represents each crewmember’s work area. We will statistically assess whether there is a connection between uniform color and the fatality rate. Believe it or not, there are “real” data about the crew from authoritative sources and the show portrayed the deaths onscreen. The table below shows how many crewmembers are in each area and how many have died.

| Color | Area | Crewmembers | Fatalities |
| --- | --- | --- | --- |
| Blue | Science and Medical | 136 | 7 |
| Gold | Command and Helm | 55 | 9 |
| Red | Operations, Engineering, and Security | 239 | 24 |
| Ship's total | All | 430 | 40 |

Tip: Because the chi-square test of association assesses the relationship between categorical variables, bar charts are a great way to graph the data. Use clustering or stacking to compare subgroups within the categories.

[Image: Bar chart displaying the fatality rates on Star Trek by uniform color.]

Related post : Bar Charts: Using, Examples, and Interpreting

Performing the Chi-Square Test of Independence for Uniform Color and Fatalities

For our example, we will determine whether the observed counts of deaths by uniform color are different from the distribution that we’d expect if there is no association between the two variables.

The table below shows how I’ve entered the data into the worksheet. You can also download the CSV dataset for StarTrekFatalities .

| Color | Status | Frequency |
| --- | --- | --- |
| Blue | Dead | 7 |
| Blue | Alive | 129 |
| Gold | Dead | 9 |
| Gold | Alive | 46 |
| Red | Dead | 24 |
| Red | Alive | 215 |

You can use the dataset to perform the analysis in your preferred statistical software. The Chi-squared test of independence results are below. As an aside, I use this example in my post about degrees of freedom in statistics . Learn why there are two degrees of freedom for the table below.
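If your preferred software is R, a minimal sketch of the same analysis (the Pearson chi-square statistic of 6.189 is also referenced in the reader comments below):

```r
# Chi-square test of independence: uniform color vs. fatality status
fatalities <- matrix(c( 7, 129,
                        9,  46,
                       24, 215),
                     nrow = 3, byrow = TRUE,
                     dimnames = list(Color = c("Blue", "Gold", "Red"),
                                     Status = c("Dead", "Alive")))
chisq.test(fatalities)  # X-squared = 6.189, df = 2, p < 0.05
```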

In our statistical results, both p-values are less than 0.05. We can reject the null hypothesis and conclude there is a relationship between shirt color and deaths. The next step is to define that relationship.

Describing the relationship between categorical variables involves comparing the observed count to the expected count in each cell of the Dead column. I’ve annotated this comparison in the statistical output above.

Statisticians refer to this type of table as a contingency table. To learn more about them and how to use them to calculate probabilities, read my post Using Contingency Tables to Calculate Probabilities .

Related post : Chi-Square Table

Graphical Results for the Chi-Square Test of Association

Additionally, you can use bar charts to graph each cell’s contribution to the Chi-square statistic, which is below.

Surprise! It’s the blue and gold uniforms that contribute the most to the Chi-square statistic and produce the statistical significance! Red shirts add almost nothing. In the statistical output, the comparison of observed counts to expected counts shows that blue shirts die less frequently than expected, gold shirts die more often than expected, and red shirts die at the expected rate.

The graph below reiterates these conclusions by displaying fatality percentages by uniform color along with the overall death rate.

The Chi-square test indicates that red shirts don’t die more frequently than expected. Hold on. There’s more to this story!

Time for a bonus lesson and a bonus analysis in this blog post!

2 Proportions test to compare Security Red-Shirts to Non-Security Red-Shirts

The bonus lesson is that it is vital to include the genuinely pertinent variables in the analysis. Perhaps the color of the shirt is not the critical variable but rather the crewmember’s work area. Crewmembers in Security, Engineering, and Operations all wear red shirts. Maybe only security guards have a higher death rate?

We can test this theory using the 2 Proportions test. We’ll compare the fatality rates of red-shirts in security to red-shirts who are not in security.

The summary data are below. In the table, the events represent the counts of deaths, while the trials are the number of personnel.

| | Events | Trials |
| --- | --- | --- |
| Security | 18 | 90 |
| Not security | 6 | 149 |
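In base R, a minimal version of this comparison is a two-sample proportions test; note that prop.test applies a continuity correction by default, so its output may differ slightly from other packages:

```r
# Two-sample proportions test: security vs. non-security red-shirts
prop.test(x = c(18, 6), n = c(90, 149))  # compares 20% vs. 4% fatality rates
```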

The p-value of 0.000 signifies that the difference between the two proportions is statistically significant. Security has a mortality rate of 20% while the other red-shirts are only at 4%.

Security officers have the highest mortality rate on the ship, closely followed by the gold-shirts. Red-shirts that are not in security have a fatality rate similar to the blue-shirts.

As it turns out, it’s not the color of the shirt that affects fatality rates; it’s the duty area. That makes more sense.

Risk by Work Area Summary

The Chi-square test of independence and the 2 Proportions test both indicate that the death rate varies by work area on the U.S.S. Enterprise. Doctors, scientists, engineers, and those in ship operations are the safest with about a 5% fatality rate. Crewmembers that are in command or security have death rates that exceed 15%!


Reader Interactions


July 18, 2024 at 10:27 am

I read this chi-squared example in your excellent book on hypothesis testing, but there are a couple of things that I can’t quite reconcile:

You described the proportion of observed Red Shirt fatalities as being the same as the expected. Relative to the Blue and Gold comparison, the Red Shirt fatality rate (24) is much closer to the expected rate (22.23), but it isn’t exactly the same.

How different would it need to be to conclude that it is different, as opposed to just being the least important in a context where we have concluded that there is an association between shirt colour and fatality rates? Would this need to be answered by a series of chi-squared tests (or 2-proportion tests) that considered the combination of one shirt colour compared with the sum of the other shirt colours? I have tried this with the following p-values resulting from the chi-squared test in Excel.

Red shirt v non-red shirt pvalue = 0.554809933

Blue shirt v non-blue shirt pvalue = 0.04363293

Gold shirt v non-gold shirt = 0.053533022

This would suggest that if the question was “do gold shirts die more frequently than other colours?” the answer would be that the data does not rule out the null hypothesis. For blue shirts this test would suggest that the data can rule out the null hypothesis, yet in the full three-colour test Gold contributed the most to the chi-squared statistic.

I have a similar example from my work which looks at the proportions of customers using two different websites and considers the proportions which are new customers, existing customers and customers returned after a long gap (reactivated).

For that test the chi-squared test p value was sufficient to rule out the null hypothesis with reactivated customers contributing the most to the chi-squared statistic. But no individual test which set each customer group against the sum of the others would be considered significant.

Is this comparison of gold shirts versus non-gold shirts, or reactivated customers versus other customers, not a valid use of this test?


February 6, 2024 at 9:55 pm

Hi Jim. I am using R to calculate a chi-square test of independence. I have a value of 1.486444 with a p-value greater than 0.05. My question is how do I interpret the value of 1.486444? Is this a strong association between the two variables or a weak association?


February 6, 2024 at 10:19 pm

You really just look at the p-value. If you want to assess the chi-square value itself, you need to use it in conjunction with a chi-square distribution with the correct degrees of freedom and use that to calculate the probability. But the p-value does that for you!

In your case, the p-value is greater than your significance level. So, you fail to reject the null hypothesis. You have insufficient evidence to conclude that an association exists between your variables.

Also, it’s important to note that this test doesn’t indicate the strength of association. It only tells you whether your sample data provide sufficient evidence to conclude that an association exists in the population. In your case, unfortunately, you can’t conclude that an association exists.


September 1, 2022 at 5:01 am

Thank you this was such a helpful article.

I’m not sure if you check these comments anymore, yet if you do I did have a quick question for you. I was trying to follow along in SPSS to reproduce your example and I managed to do most of it. I put your data in, used Weight Cases by Frequency of Deaths, and then was able to do the Chi Square analysis that achieved the exact same results as yours.

Unfortunately, I am totally stuck on the next part where you do the 2 graphs, especially the Percentage of Fatalities by Shirt Color. The math makes sense – it’s just e.g., Gold deaths / (Gold deaths + Gold Alive). However, I cannot seem to figure out how to create a bar chart like that in SPSS!? I’ve tried every combination of variables and settings I can think of in the Chart Builder and no luck. I’ve also tried the Compute Variable option with various formulas to create a new column with the death percentages by shirt color but can’t find a way to sum the frequencies. The best I can get is using an IF statement so it only calculates on the rows with a Death statistic and then I can get the first part: Frequency / ???, but can’t sum the 2 frequencies of Deaths & Alive per shirt colour to calculate the figure properly. And I’m not sure what other things I can try.

So basically I’m totally stuck at the moment. If by some chance you see this, is there any chance you might please be able to help me figure out how to do that Percentage of Fatalities by Shirt Color bar graph in SPSS? The only way I can see at the moment is to manually type the calculated figures into a new dataset and graph it. That would work but doesn’t seem a very practical way of doing things if this was a large dataset instead of a small example one. Hence I’m assuming there must be a better way of doing this?

Thank you in advance for any help you can give me.

September 1, 2022 at 3:38 pm

Yes, I definitely check these comments!

Unfortunately, I don’t have much experience using SPSS, so I’ll be of limited help with that. There must be some way to do that in SPSS though. Worst case scenario, calculate the percentages by hand or in Excel and then enter them into SPSS and graph them. That shouldn’t be necessary but would work in a pinch.

Perhaps someone with more SPSS experience can provide some tips?


September 18, 2021 at 6:09 pm

Hi. This comment relates to Warren’s post. The null hypothesis is that there is no statistically significant relationship between “Uniform color” and “Status”. During the summing used to calculate the Chi-squared statistic, each of the (6) contributions are included. (3 Uniform colors x 2 status possibilities) The “Alive” column gives the small contributions that bring the total contribution from 5.6129 up to 6.189. Any reasoning specific to the “Dead” column only begins after the 2-dimensional Chi-squared calculation has been completed.

September 19, 2021 at 12:38 am

Hi Bill, thanks for your clarifications. I was confused about which comment you were replying to!

September 17, 2021 at 5:53 pm

The chi-square formula is: χ² = ∑(Oi − Ei)²/Ei, where Oi = observed value (actual value) and Ei = expected value.

September 17, 2021 at 5:56 pm

Hi Bill, thanks. I do cover the formula and example calculations in my other post on the topic, How Chi-Squared Works .


September 16, 2021 at 6:24 pm

Why is the Pearson Chi-Square statistic not equal to the sum of the contributions to Chi-Square? I get 5.6129. The p-value for that Chi-Square statistic is .0604, which is NOT significant in this century OR the 24th.


September 14, 2021 at 8:25 am

Thank you JIm, Excellent concept teaching!


July 15, 2021 at 1:05 pm

Thank you so much for the Star Trek example! As a long-time Trek fan and Stats student, I absolutely love the debunking of the red shirt theory!

July 19, 2021 at 10:19 pm

I’m so glad you liked my example. I’m a life-long Trek fan as well! I found the red shirt question to be interesting. On the one hand, part of the answer is that red shirts comprise just over 50% of the crew, so of course they’ll have more deaths. And then on the other hand, it’s only certain red shirts that actually have an elevated risk, those in security.


May 16, 2021 at 1:42 pm

Got this response from the gentleman who did the calculation using a Chi Square. Would you mind commenting? “The numbers reported are nominate (counting) numbers not ordinate (measurement) numbers. As such chi-square analysis must be used to statistically compare outcomes. Two-sample student t-tests cannot be used for ordinate numbers. Correlations are also not usually used for ordinate numbers and most importantly correlations do NOT show cause and effect.”

May 16, 2021 at 3:13 pm

I agree with the first comment. However, please note that I recommended the 2-sample proportions test and the other person is mentioning the 2-sample t-test. Very different tests! And, I agree that the t-test is not appropriate for the Pfizer data. Basically, he’s saying you have categorical data and the t-test is for continuous data. That’s all correct. And that’s why I recommended the proportions test.

As for the other part about “correlations do NOT show cause and effect.” That’s not quite correct. More accurately, you’d say that correlations do not NECESSARILY imply causation. Sometimes they do and sometimes they don’t imply causation. It depends on the context in which the data were collected. Correlations DO suggest causation when you use a randomized controlled trial (RCT) for the experiment and data collection, which is exactly what Pfizer did. Consequently, the Pfizer data DO suggest that the vaccine caused a reduction in the proportion of COVID infections in the vaccine group compared to the control group (no vaccine). RCTs are intentionally designed so you can draw causal inferences, which is why the FDA requires them for vaccine and other medical trials.

If you’re interested, I’ve written an article about why randomized controlled trials allow you to make causal inferences .

May 16, 2021 at 12:41 pm

Mr. Jim Frost…You are Da Man!! Thank you!! Yes, this is the same document I have been looking at, just did not know how to interpret Table 9. Sorry, never intended to ask you for medical advice, just wanted to understand the statistics and feel confident that the calculations were performed correctly. You have made my day! Now just a purely statistics question, assuming I have not worn out your patience with my dumb questions…Can you explain the criteria used to determine when a Chi-Square test should be used versus a 2-sample proportions test? I think I saw a comment from someone on your website stating that the Chi-Square test is often misused in the medical field. Fascinating, fascinating field you are in. Thank you so much for sharing your knowledge and expertise.

May 16, 2021 at 3:00 pm

You bet! That’s why I’m here . . . to educate and clarify statistics and statistical analyses!

The chi-squared test of independence (or association) and the two-sample proportions test are related. The main difference is that the chi-squared test is more general while the 2-sample proportions test is more specific. And, it happens that the proportions test is more targeted at specifically the type of data you have.

The chi-squared test handles two categorical variables where each one can have two or more values. And, it tests whether there is an association between the categorical variables. However, it does not provide an estimate of the effect size or a CI. If you used the chi-squared test with the Pfizer data, you’d presumably obtain significant results and know that an association exists, but not the nature or strength of that association.

The two proportions test also works with categorical data but you must have two variables that each have two levels. In other words, you’re dealing with binary data and, hence, the binomial distribution. The Pfizer data you had fits this exactly. One of the variables is experimental group: control or vaccine. The other variable is COVID status: infected or not infected. Where it really shines in comparison to the chi-squared test is that it gives you an effect size and a CI for the effect size. Proportions and percentages are basically the same thing, but displayed differently: 0.75 vs. 75%.

What you’re interested in answering is whether the percentage (or proportion) of infections amongst those in the vaccinated group is significantly different than the percentage of infections for those in control group. And, that’s the exact question that the proportions test answers. Basically, it provides a more germane answer to that question.

With the Pfizer data, the answer is yes, those in the vaccinated group have a significantly lower proportion of infections than those in the control group (no vaccine). Additionally, you’ll see the proportion for each group listed, and the effect size is the difference between the proportion, which you can find on a separate line, along with the CI of the difference.

Compare that more specific and helpful answer to the one that chi-squared provides: yes, there’s an association between vaccinations and infections. Both are correct but because the proportions test is more applicable to the specific data at hand, it gives a more useful answer.

I see you have an additional comment with questions, so I’m off to that one!

May 15, 2021 at 1:00 pm

Hi Jim, So sorry if my response came off as anything but appreciative of your input. I tried to duplicate your results in your Flu Vaccine article using the 2 Proportion test as you recommended. I was able to duplicate your Estimate for Difference of -0.01942, but I could not duplicate your value for Z, so clearly I am not doing the calculation correctly – even when using Z calculators. So since I couldn’t duplicate your correct results for your flu example, I did not have confidence to proceed to Moderna. I was able to calculate effectiveness (the hazard ratio that is widely reported), but as I have reviewed the EUA documents presented to the FDA in December 2020, I know that there is no regression analysis, and most importantly, no data to show an antibody response produced by the vaccine. So they are not showing the vaccine was successful in producing an immune response, just giving simplistic proportions of how many got covid and how many didn’t. And as they did not even factor in the number of people who had had covid prior to vaccine, I just cant understand how these numbers have any significance at all. I mention the PCR test because it too is under an EUA, and has severe limitations. I would think that those limitations would be statistically significant, as are the symptoms which can indicate any bacterial or viral infection. And you state “I’m sure you can find a journal article or documentation that shows the thorough results if you’re interested”. Clearly I am VERY interested, as I love my parents more than life itself, and have seen the VAERS data, and I don’t want them to be the next statistic. But I CANT find the thorough results that you say are so easy to find. If I could I would not be trying to learn to perform statistical calculations. So I went out on a limb, as you are a fellow trekky and seem like a super nice guy, sharing your expertise with others, and thought you might be able to help me understand the statistics so I can help my parents make an informed choice. We are at a point that children and pregnant women are getting these vaccines. Unhealthy, elderly people in nursing homes (all the people excluded in the trials) are getting these vaccines. I simply ask the question…..do these vaccines provide more protection than NOT getting the vaccine? The ENTIRE POPULATION is being forced to get these vaccines. And you tell me “I’m sure you can find a journal article or documentation that shows the thorough results if you’re interested.” I can only ask…how are you NOT interested? This is the most important statistical question of our lifetime, and of your children’s and granchildren’s lifetime. And I find that no physician or statistician able or willing to answer these questions. Respectfully, Chris

May 15, 2021 at 11:00 pm

No worries. On my website, I’m just discussing the statistical nature of Moderna’s study. Of course, everyone is free to make their own determination and decide accordingly.

[Output: Pfizer data analyzed by a two-sample proportions test.]

You’re obviously free to question the methods and analysis, but as a statistician, I’m satisfied that Moderna performed an appropriate clinical trial and followed that up with a rigorous and appropriate statistical analysis. In my opinion, they have demonstrated that their vaccine is safe and effective. The only caveat is that we don’t have long-term safety data because not enough time has gone by. However, most side effects for vaccines show up in the first 45 days. That timeframe occurred during the trial and all side effects were recorded.

However, I’m not going to get into a debate about whether anyone should get the vaccine or not. I run a statistics website and that’s the aspect I’m focusing on. There are other places to debate the merits of being vaccinated.

May 14, 2021 at 8:05 pm

Hi Jim, thanks for the reply. I have to admit the details of all the statistical methods you mention are over my head, but by scanning the document it appears you did not actually calculate the vaccine's efficacy, just stated how the analysis should be done. I am referring to comments like "To analyze the COVID-19 vaccine data, statisticians will use a stratified Cox proportional hazard regression model to assess the magnitude of the difference between treatment and control groups using a one-sided 0.025 significance level". And "The full data and analyses are currently unavailable, but we can evaluate their interim analysis report. Moderna (and Pfizer) are still assessing the data and will present their analyses to Federal agencies in December 2020." I am looking at the December 2020 reports that both Pfizer and Moderna presented to the FDA, and I see no "stratified Cox proportional hazard regression model", just the simplistic hazard ratio you mention in your paper. I don't see how that shows the results are statistically significant and not chance. Also, the PCR test does not confirm disease, just the presence of virus (dead or alive), and virus presence doesn't indicate disease. And the symptoms are symptoms of any viral or bacterial infection, or cancer. Just sort of surprised to see no statistical analysis in the December 2020 reports. Was hoping you had done the heavy lifting…lol

May 14, 2021 at 11:38 pm

Hi Christine,

You had asked if Chi-square would work for your data and my response was no, but here are two methods that would. No, I didn’t analyze the Moderna data myself. I don’t have access to their complete data that would allow me to replicate their results. However, in my post, I did calculate the effectiveness, which you can do using the numbers I had, but not the significance.

Based on the data you indicated you had, I’d recommend the two-sample proportions test that I illustrate in the flu vaccine post. That won’t replicate the more complex analyses but is doable with the data that you have.

The Cox proportional hazard regression model analyzes the hazard ratio. The hazard ratio is the outcome measure in this context. They're tied together, and it's the regression analysis that indicates significance. I'd imagine you'd have to read a thorough report to get the nitty-gritty details. I got the details of their analysis straight from Moderna.

I'm not sure what your point is with the PCR test. But I'm just reporting how they did their analysis.

Moderna, Pfizer, and the others have done the “heavy lifting.” When I wrote the post about the COVID vaccination, it was before it was approved for emergency use. By this point, I’m sure you can find a journal article or documentation that shows the thorough results if you’re interested.

May 14, 2021 at 2:56 pm

Hi Jim, my parents are looking into getting the Pfizer vaccine, and I was wondering if I could use a chi-square analysis to see if it's statistically effective. From the EUA document, 17411 people got the Pfizer vaccine, and of those people – 8 got covid, and 17403 did not. Of the control group of 17511 that did not get the vaccine, 162 got covid, and 17349 did not. My calculations show this is not statistically significant, but I wasn't sure if I did my calculation correctly, or if I can even use a chi-square for this data. Can you help? P.S. As a Trekkie family, I love your analysis…but we all know it's the new guy with a speaking part that gets axed…lol

May 14, 2021 at 3:28 pm

There are several ways you can analyze the effectiveness. I write about how they assessed the Moderna vaccine's effectiveness, which uses a special type of regression analysis.

The other approach is to use a two-sample proportions test. I don't write about that in the COVID context, but I show how it works for flu vaccinations. The same ideas apply to COVID vaccinations. You're comparing the proportion of infections in the control group to the proportion in the treatment group. Hence, a two-sample proportions test.

A chi-square analysis won’t get you where you want to go. It would tell you if there is an association, but it’s not going to tell you the effect size.

I'd read those two posts that I wrote. They'll give you good insight into possible ways to analyze the data. I also show how they calculate effectiveness for both the COVID and flu shots!

I hope that helps!

' src=

April 9, 2021 at 2:49 am

thank you so much for your response and advice! I will probably go for the logistic regression then 🙂

All the best for you!

April 10, 2021 at 12:39 am

You’re very welcome! Best of luck with your study! 🙂

April 7, 2021 at 4:18 am

thank you so much for your quick response! This actually helps me a lot, and I had already thought about doing a binary logistic regression. However, my supervisor wanted me to use a chi-square test, as he thinks it is easier to perform and less work. So now I am struggling to decide which option would be more feasible.

Coming back to the chi-square test – could I create a new variable which differentiates between the four experimental conditions and use this as a new ID? Or can I use the DV to weight the frequencies in the chi-square test? I did that once in an analysis using a continuous DV as the weight. Yet, I am not sure if or how that works with a binary variable. Do you have an idea what would work best in the case of a chi-square test?

Thank you so much!!

April 8, 2021 at 11:25 pm

You’re very welcome!

I don't think either binary logistic regression or chi-square is more or less work than the other. However, chi-square won't give you the answers you want. You can't do interaction effects with chi-square. You won't get nice odds ratios, which are a much more intuitive way to interpret the results than chi-square, at least in my opinion. With chi-square, you don't get a p-value/significance for each variable, just the overall analysis. With logistic regression, you get p-values for each variable and the interaction term if you include it.

I think you can do chi-square analyses with more than one independent variable. You'd essentially have a three-dimensional table rather than a two-dimensional table. I've never done that myself, so I don't have much advice to offer you there. But I strongly recommend using logistic regression. You'll get results that are more useful.

April 6, 2021 at 10:59 am

thank you so much for this helpful post!

April 6, 2021 at 5:36 am

thank you for this very helpful post. Currently, I am working on my master's thesis, and I am struggling with identifying the right way to test my hypothesis, as in my case I have three dummy variables (2 independent and 1 dependent).

The experiment was on the topic of advice taking. It was a 2×2 between-subjects design manipulating the source of advice to be a human (0) or an algorithm (1) and the task to be easy (0) or difficult (1). Then, I measured whether the participants followed (1) or did not follow (0) the advice. Now, I want to test if there is an interaction effect. In the easy task I expect that the participants rather follow the human advice, and in the difficult task the participants rather follow the algorithmic advice.

I want to test this using a chi-square independence test, but I am not sure how to do that with three variables. Should I rather use the variable "Follow/NotFollow" as a weight, or should I combine two of the variables so that I have a new variable with four categories, e.g. Easy.Human, Easy.Algorithm, Difficult.Human, Difficult.Algorithm or Human.Follow, Human.NotFollow, Algorithm.Follow, Algorithm.NotFollow?

I am not sure if this is scientifically correct. I would highly appreciate your help and your advice.

Thank you so much in advance! Best, Anni

April 7, 2021 at 1:58 am

I think using binary logistic regression would be your best bet. You can use your dummy DV with that type, and having two dummy IVs also works. You can also include an interaction term, which isn't possible in chi-square tests. This model would tell you whether source of advice, difficulty of task, and their interaction relate to the probability of participants following the advice.
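For instance, here's a minimal sketch of that model in R. The data frame and variable names are made up and the data are simulated placeholders, but the formula shows how the interaction term enters:

set.seed(1)
# Hypothetical data frame mirroring the design: source (0 = human, 1 = algorithm),
# difficulty (0 = easy, 1 = difficult), followed (0 = did not follow, 1 = followed).
advice <- data.frame(
  source     = rbinom(200, 1, 0.5),
  difficulty = rbinom(200, 1, 0.5),
  followed   = rbinom(200, 1, 0.5)
)

# source * difficulty expands to both main effects plus their interaction.
fit <- glm(followed ~ source * difficulty, data = advice, family = binomial)

summary(fit)     # a p-value for each term, including the interaction
exp(coef(fit))   # odds ratios, which are easier to interpret

With real data, a significant source:difficulty coefficient would be the evidence for the predicted interaction.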

' src=

March 29, 2021 at 12:43 pm

Hi Jim, I want to thank you for all the content that you have posted online. It has been very helpful for me to apply simple principles of statistics at work. I wanted your thoughts on how to approach the following problem, which appears to be slightly different from the examples that you shared above. We have two groups – a test group (exposed to an ad for brand A) and a control group (not exposed to any ads for brand A). We asked both groups a question: Have you heard of brand A? The possible answers were Y/N. We then did a t-test to determine if the answers were significantly different for the test and control groups (they were). We asked both groups a follow-up question as well: How likely are you to buy any of the following brands in the next 3 months? The options were as follows (any one could be picked; B, C, and D are competing brands with A): 1. A, 2. B, 3. C, 4. D. We wanted to check if the responses we received from both groups were statistically different. Based on my reading, it seemed like the chi-square test was the right one to run here. However, I wasn't too sure what the categorical variables would be in this case and how we could run the chi-square test here. Would like to get your input on how to approach this. Thanks

March 29, 2021 at 2:53 pm

For the first question, I'd typically recommend a 2-sample proportions test. You have two groups, and the outcome variable is binary, which is good for proportions. Using a 2-sample proportions test will tell you whether the proportion of individuals who have heard of Brand A differs between the two groups (ads and no ads). You could use the chi-squared test of independence for this case, but I recommend the proportions test because it's designed specifically for this scenario. The procedure can also estimate the effect size and a CI for the effect size (depending on your software). A t-test is not appropriate for these data.
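As a rough sketch of what that looks like in R (the counts below are invented purely for illustration):

# Suppose 130 of 200 in the ad group and 95 of 200 in the no-ad group
# said they had heard of Brand A (hypothetical numbers).
prop.test(x = c(130, 95), n = c(200, 200))
# The output gives both sample proportions, a p-value for the difference,
# and a 95% CI for the difference (the effect size).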

For the next question, yes, the chi-square test is a good choice as long as they can only pick one of the options. Maybe: which brand are you most likely to purchase in the next several months? The categories must be mutually exclusive to use chi-square. One variable could be exposure to the ad, with yes and no as levels. The other would be the purchase question, with A, B, C, and D as levels. That gives you a 2 X 4 table for your chi-squared test of independence.
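And a sketch of the corresponding 2 X 4 chi-squared test in R, again with invented counts:

# Hypothetical purchase-intention counts for the ad and no-ad groups.
brands <- matrix(c(60, 50, 45, 45,
                   40, 55, 50, 55),
                 nrow = 2, byrow = TRUE,
                 dimnames = list(Exposure = c("Ad", "No ad"),
                                 Brand = c("A", "B", "C", "D")))
chisq.test(brands)   # tests whether purchase intention is independent of ad exposure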

' src=

March 29, 2021 at 5:08 am

I don’t see the relationship between the table of shirt color and status and the tabulated statistics. Sam

March 29, 2021 at 3:39 pm

I show the relationship several ways in this post. The key is to understand how the actual counts compare to the expected counts. The analysis calculates the expected counts under the assumption that there is no relationship between the variables. Consequently, when there are differences between the actual and expected counts, a relationship potentially exists.

In the Tabulated Statistics output, I circle and explain how the actual counts compare to the expected counts. Blue uniforms have fewer deaths than expected, while Gold uniforms have more deaths than expected. Red uniforms equal the expected amount, although I explore that in more detail later in the post. You can also see these relationships in the graph titled Percentage of Fatalities.

Overall, the results show the relationship between uniform color and deaths, and the p-value indicates that this relationship is statistically significant.

' src=

February 20, 2021 at 8:51 am

Suppose you have two variables: checking out books and means of getting to the central library. How might you formulate the null hypothesis and alternative hypothesis for the independence test? Please answer, anyone.

February 21, 2021 at 3:15 pm

In this case, the null hypothesis states that there is no relationship between means to get to the library and checking out a book. The alternative hypothesis states that there is a relationship between them.

' src=

November 18, 2020 at 12:39 pm

Hi there, I'm just wondering if it would be appropriate to use a chi-square test in the following scenario: – A data set of 1000 individuals. – Calculate Score A for all 1000 individuals; results are continuous numerical data, e.g. 2.13, 3.16, which then allow individuals to be placed in categories: low risk (3.86). – Calculate Score B for the same 1000 individuals; results are discrete numerical data, e.g. 1, 6, 26, 4, which then allow individuals to be placed in categories: low risk (26). – I then want to compare the two scoring systems A & B: (1) to see if the individuals are scoring similarly on both scores; (2) I have reason to believe one of the scores overestimates the risk, and I'd like to test this.

Thank you, I haven't been able to find any similar examples, and it's stressing me out 🙁

' src=

November 13, 2020 at 1:53 pm

Would you be able to advise?

My organization is sending out 6 different emails to employees, in which they have to click on a link in the email. We want to see if one variation in language might get a higher click rate for the link. So we have 6 between-subjects conditions, and the response can either be 'clicked on the link' or 'did NOT click on the link'.

Is this a Chi-Square test of Independence? Also, how would I know where the difference lies if the test is significant? (i.e., what is the non-parametric equivalent of running an ANOVA and follow-up pairwise comparisons?)

Thanks Jim!

' src=

October 15, 2020 at 11:05 pm

I am working on the press coverage of civil-military relations in the Pakistani press from 2008 to 2018. I want to check whether there is a difference in coverage between two tenures, i.e., 2008 to 2013 and 2013 to 2018. Secondly, I want to check the difference in coverage between two types of newspapers, i.e., English newspapers and Urdu newspapers. Furthermore, I also want to check the category-wise difference in coverage across the tenure 2008 to 2018.

I have divided my data into three categories: 1 is pro-civilian, 2 is pro-military, and 3 is neutral.

' src=

October 4, 2020 at 4:07 am

Hi, thank you so much for this. I would like to ask: if the study is about whether factors such as pricing, marketing, and brand affect the intention of the buyer to purchase the product, can I use a chi-square test for the statistical treatment? And if not, may I ask what statistical treatment you would suggest? Thank you so much again.

October 3, 2020 at 2:51 pm

Jim, Thank you for the post. You displayed a lot of creativity linking the two lessons to Star Trek. Your website and ebook offerings are very inspiring to me. Bill

October 4, 2020 at 12:53 am

Thanks so much, Bill. I really appreciate the kind words and I’m happy that the website and ebooks have been helpful!

' src=

September 29, 2020 at 7:10 am

Thank you for your explanation. I am trying to help my son with his final school year investigation. He has raw data which he collected from 21 people of varying experience. They all threw a rugby ball at a target, and the accuracy, time of ball in the air, and experience (rated from 1-5) were all recorded. He has calculated the speed and the displacement, and used correlation to compare speed versus accuracy and experience versus accuracy. He needs to incrementally increase the difficulty of the maths he uses in his analysis, and he was thinking of the chi-square test as a next step; however, from your explanation above, the current form of his data would not be suitable for this test. Is there a way of re-arranging the data so that we can use the chi-square test? Thanks!

September 30, 2020 at 4:33 pm

Hi Rhonwen,

The chi-squared test of independence looks for correlation between categorical variables. From your description, I’m not seeing a good pair of categorical variables to test for correlation. To me, the next step for this data appears to be regression analysis.

' src=

September 12, 2020 at 5:37 pm

Thank you for the detailed teaching! I think this explains chi square much better than other websites I have found today. Do you mind sharing which software you use to get Expected Count and contribution to Chi square? Thank you for your help.

' src=

August 22, 2020 at 1:06 pm

Good day Jim! I was wondering what kind of data analysis I should use for research on knowledge, attitudes, and practices? Looking forward to your reply! Thank you!

' src=

June 25, 2020 at 8:43 am

Very informative and easy to understand. Thank you so much, sir.

' src=

June 2, 2020 at 11:03 am

Hi, I wanted to know how the significance probability can be calculated if the significance level wasn't given. Thank you

June 3, 2020 at 7:39 pm

Hi, you don’t need to know the significance level to be able to calculate the p-value. For calculating the p-value, you must know the null hypothesis, which we do for this example.

However, I do use a significance level of 0.05 for this example, making the results statistically significant.

' src=

May 26, 2020 at 5:55 am

What summary statistics can I use to describe the graph of categorical data? Good presentation, by the way. Very insightful.

May 26, 2020 at 8:39 pm

Hi Michael,

For categorical data like the type in this example, which is in a two-way contingency table, you'd often use counts or percentages. A bar chart is often a good choice for graphing counts or percentages by multiple categories. I show an example of graphing data for contingency tables in my Introduction to Statistics ebook.

' src=

May 25, 2020 at 10:27 am

Thank you for your answer. I saw online that bar graphs can be used to visualise the data (I guess it would be the percentage of deaths in my case) with 95% CIs for the error bars. Is this also applicable if I only have a 2×2 contingency table? If not, what could my error bars be?

May 26, 2020 at 8:59 pm

Hi John, you can obtain CIs for proportions, which are basically percentages. And bar charts are often good for graphing contingency tables.

May 24, 2020 at 9:34 am

Hi! So I am working on this little project where I am trying to find a relationship between sex and mortality brought on by this disease, so my variables are: sex (male or female) and status (dead or alive). I am new to statistics, so I do not know much. Is there any way to check the normality of categorical data? There is a part wherein our analysis must be based on data normality, but I am not sure if this applies to categorical data. Thank you for your answer!

May 24, 2020 at 4:23 pm

The normal distribution is for continuous data. You have discrete data values, two binary variables to be precise. So, the normal distribution is not applicable to your data.

' src=

May 21, 2020 at 11:26 pm

Hi Jim, this was really helpful. I am in the midst of my proposal on a research to determine the association between burnout and physical activity among anaesthesia trainees.

They are both categorical variables: physical activity (3 categories: high, moderate, low) and burnout (2 categories: high and low).

How do I calculate my sample size for my study?

May 22, 2020 at 2:13 pm

Hi Jaishree,

I suggest you download a free sample size and power calculation program called G*Power . Then do the following:

  • In G*Power, under Test Family, choose χ². Under Statistical test, choose Goodness-of-fit tests: Contingency tables.
  • In Effect size w, you'll need to enter a value: 0.1 = weak, 0.3 = medium, and 0.5 = large. Your choice should be based on subject-area knowledge.
  • In the β/α ratio, that's the ratio of the Type II error rate to the Type I error rate. It has a default value of 1, but that seems too low; 2-3 might be more appropriate, but you can try different values to see how this affects the results.
  • Then you need to enter your sample size and DF. Read my post about Degrees of Freedom , which includes a section about calculating it for chi-square tests.
  • Click Calculate.

Experiment and adjust values to see how that changes the output. You want to find a sample size that produces sufficient power while incorporating your best estimates of the other parameters (effect size, etc.).
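If you'd rather work in R than G*Power, the pwr package performs an equivalent calculation. A minimal sketch, with placeholder inputs you'd replace with your own:

# install.packages("pwr")   # if not already installed
library(pwr)
# Solve for the required N given a medium effect (w = 0.3), alpha = 0.05,
# power = 0.80, and df = (3 - 1) * (2 - 1) = 2 for a 3 x 2 table.
pwr.chisq.test(w = 0.3, df = 2, sig.level = 0.05, power = 0.80)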

' src=

May 16, 2020 at 10:55 am

Learned so much from this post!! This was such a clear example that it is the first time for me that some statistic tests really make sense to me. Thank you so much for sharing your knowledge, Jim!!

' src=

May 5, 2020 at 11:46 am

the information that you have given here has been so useful to me – I really understand it much better now. So, thank you very much! Just a quick question: how did you graph the contributions to the chi-square statistic? I've been using Stata to do some data analysis, and I'm not sure how I would be able to create a graph like that for my own data. Any insight that you can give would be extremely useful.

May 6, 2020 at 1:30 am

I used Minitab statistical software for the graphs. I think graphs often bring the data to life more than just a table of numbers.

' src=

March 20, 2020 at 2:38 pm

I have the results of two Exit Satisfaction Surveys related to two cohorts (graduates of 2017-18 and graduates of 2018-19). The information I received was just the "number" of ratings on each of the 5 points on the Likert scale (e.g., 122 respondents Strongly Agreed to a given item). I changed the raw ratings into percentages for comparison, e.g., for Part A of the survey (Proficiency and Knowledge in my major field), I calculated the minimum and maximum percentages on the Strongly Agree point and did the same for the other points on the scale. My questions are: (1) can I report the range of percentages on each point on the scale for each item, or is it better to report an overall agreement/disagreement? and (2) what are the best statistics to compare the satisfaction of the two cohorts in the same survey? The 2017-18 cohort included 126 graduates, and the 2018-19 cohort included 296 graduates.

I checked out your Introduction to Statistics book that I purchased, but I couldn't decide about the appropriate statistics for the analysis of each of the surveys, as well as for the comparison of both cohorts.

My sincere thanks in advance for your time and advice,

All the best, Ellie

' src=

March 20, 2020 at 7:30 am

Thank you for an excellent post! I will soon perform a chi-square test of independence on survey responses with two variables myself, and now think it might be good to start with a 2-proportions test (is a Z-test with 2 proportions what you use in this example?). Since you don't discuss whether the Star Trek data meet the assumptions of the two tests you use, I wonder if they share approximately the same assumptions? I have already made certain that my data may be used with the chi-square (my data are, by the way, not necessarily normally distributed, and have unknown mean and variance), so can I be comfortable with using a 2-proportions Z-test too? I hope you have the time to help me out here!

' src=

February 18, 2020 at 8:53 am

Excellent post. Btw, is it similar to what they call the Test of Association that uses a contingency table? The way they compute the expected value is (row total × column total)/(sample total). And to check if there is a relationship between two variables, they check if the calculated chi-squared value is greater than the critical value of the chi-squared distribution. Is it just the same?

February 20, 2020 at 11:09 am

Hi Hephzibah,

Yes, they’re the same test–test of independence and test of association. I’ll add something to that effect to the article to make that more clear.

' src=

January 6, 2020 at 9:24 am

Jim, thanks for creating and publishing this great content. In the initial chi-square test for independence, we determined that shirt color does have a relationship with death rate. The Pearson chi-square statistic is 6.189; is this number meaningful? How do we interpret this in plain English?

January 6, 2020 at 3:09 pm

There's really no direct interpretation of the chi-square value. That's the test statistic, similar to the t-value in t-tests and the F-value in F-tests. These values are placed in the chi-square probability distribution that has the specified degrees of freedom (df=2 for this example). By placing the value into the probability distribution, the procedure can calculate probabilities, such as the p-value. I've been meaning to write a post that shows how this works for chi-squared tests. I show how this works for t-tests and F-tests for one-way ANOVA. Read those to get an idea of the process. Of course, this chi-squared test uses chi-squared as the test statistic and probability distribution.

I’ll write a post soon about how this test works, both in terms of calculating the chi-square value itself and then using it in the probability distribution.

' src=

January 5, 2020 at 7:28 am

Would the chi-squared test be the statistical test of choice for comparing the incidence rates of disease X between two states? Many thanks.

January 6, 2020 at 1:20 am

Hi Michaela,

It sounds like you'd need to use a two-sample proportions test. I show an example of this test using real data in my post about the effectiveness of flu vaccinations. The reason you'd need to use a proportions test is because your observed data are presumably binary (diseased/not diseased).

You could use the chi-squared test, but I think for your case the results are easier to understand using a two-sample proportions test.

' src=

June 3, 2019 at 6:57 pm

Let's say the expected salary for a position is 20,000 dollars. In our observed salaries we have various figures a little above and below 20,000, and we want to do a hypothesis test. These salaries are ratio data, so does that mean we cannot use chi-square? Do we have to convert? How? In fact, when you run a chi-square on the salary data, the chi-square statistic turns out to be very high, sort of off the chi-square critical value chart.

June 3, 2019 at 10:28 pm

Chi-square analysis requires two or more categorical (nominal) variables. Salary is a continuous (ratio) variable. Consequently, you can’t use chi-square.

If you have the one continuous variable of salary and you want to determine whether the difference between the mean salary and $20,000 is statistically significant or not, you’d need to use a one-sample t-test. My post about the different forms of t-tests should be helpful for you.

April 13, 2019 at 4:23 am

I don't know how to thank you for your detailed, informative reply. And I am happy that a specialist like you found this study interesting, yoohoo 🙂

As to your comment on how we (me and my graduate student, whose thesis I am directing) tracked the errors from Writing Sample 1 to 5 for each participant, we did it manually through a close content analysis. I had no idea of a better alternative, since going through 25 writing samples needed meticulous comparison for each participant. I advised my student to tabulate the number, frequency, and type of errors for each participant separately so we could keep track of their (lack of) improvement depending on the participant's proficiency level.

Do you have any suggestion to make it more rigorous?

Very many thanks, Ellie

April 10, 2019 at 11:52 am

Hi, Jim. I first decided to choose chi-square to analyze my data, but now I am thinking of Poisson regression since my dependent variable is a count. I want to see if there is any significant difference between Grade 10 students' perceptions of their writing problems and the frequency of their writing errors in the five paragraphs they wrote. Here is the detailed situation:

1. Five sample paragraphs were collected from 5 students at 5 proficiency levels based on their total marks in the English final exam in the previous semester (from Outstanding to Poor).
2. The students participated in an interview and expressed their perceptions of their problem areas in writing.
3. The students submitted their paragraphs every 2 weeks during the semester.
4. The paragraphs were marked based on the school's marking rubrics.
5. Errors were categorized under five components (e.g., grammar, word choice, etc.).
6. Paragraphs were compared to measure the students' improvement by counting errors manually in each and every paragraph.
7. The students' errors were also compared to their perceived problem areas to study the extent of their awareness of their writing problems. This comparison showed that students were not aware of a major part of their errors, while their perceived errors were not necessarily observed in their writing samples.
8. Comparison of Paragraphs 1 and 5 for each student showed a decrease in the number of errors in some language components, while some errors still persisted.
9. I'm also interested to see if proficiency level has any impact on students' perceptions of their real problem areas and the frequency of their errors in each language category.

My question is: which test should be used to answer Qs 7 and 8? As to Q9, one of the dependent variables is a count and the other one is nominal. One correlation I'm thinking of is eta squared (interval-nominal), but for proficiency-frequency I'm not sure.

My sincere apologies for this long query and many thanks for any clues to the right stats.

April 11, 2019 at 12:25 am

That sounds like a very interesting study!

I think that you're correct to use some form of regression rather than chi-square. The chi-squared test of independence doesn't work with counts within an observation. Chi-squared looks at the multiple characteristics of an observation and essentially places it in a basket for that combination. For example, you have a red-shirt/dead basket and a red-shirt/alive basket. The procedure looks at each observation, places it into one of the baskets, and then counts the observations in each basket.

What you have are counts (of errors) within each observation. You want to understand the IVs that relate to those counts. That's a regression thing. Now, what form of regression? Because it involves counts, Poisson regression is a good possibility. You might also read up on negative binomial regression, which is related. Sometimes you have count data that doesn't meet certain requirements of the Poisson distribution, but you can still use negative binomial regression. For more information, look at pages 321-322 of my ebook that you just bought! 🙂 I talk a bit about regression with counts.
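As a rough sketch of what Poisson regression looks like in R (the data frame and variable names below are hypothetical, with simulated counts standing in for the real error data):

set.seed(1)
writing <- data.frame(
  errors      = rpois(25, lambda = 8),   # simulated error counts
  proficiency = gl(5, 5, labels = c("Poor", "Fair", "Good", "VeryGood", "Outstanding")),
  sample_no   = rep(1:5, times = 5)      # which of the five writing samples
)

fit <- glm(errors ~ proficiency + sample_no, data = writing, family = poisson)
summary(fit)
# If the counts are overdispersed, MASS::glm.nb() fits the negative binomial analogue.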

And, there's a chance that you might be able to use OLS regression. That depends on how you're handling the multiple assessments and the average number of errors. The Poisson distribution begins to approximate the normal distribution at around a mean of 25-ish. If the number of errors tends to fall around there or higher, OLS might be the ticket! If you're summing multiple observations together, that might help in this regard.

I don't understand the design of how you're tracking changes in the number of errors over time and how you'll model that. You might include lagged values of errors to explain current errors, along with other possible IVs.

I found point number 7 to be really interesting. Is it that the blind spot allows those errors to persist in greater numbers, while awareness of errors reduced the numbers of those types? Your interpretation of that should be very interesting!

Oh, and for the nominal dependent variable, use nominal logistic regression (p. 319-320)!

I hope this helps!

' src=

March 27, 2019 at 11:53 am

Thanks for your clear posts. Could you please give some insight, as in the t-test and F-test posts, into how we can calculate a chi-square test statistic value and how to convert it to a p-value?

March 29, 2019 at 12:26 am

I have that exact topic in mind for a future blog post! I’ll write one up similar to the t-test and F-test posts in the near future. It’s too much to do in the comments section, but soon an entire post for it! I’ll aim for sometime in the next couple of months. Stay tuned!

' src=

November 16, 2018 at 1:47 pm

This was great. 🙂

' src=

September 21, 2018 at 10:47 am

thanks, i have learnt a lot

' src=

February 5, 2018 at 4:26 pm

Hello, thanks for the nice tutorial. Can you please explain how the 'Expected count' is calculated in the table "Tabulated Statistics: Uniform Color, Status"?

February 5, 2018 at 10:25 pm

Hi Shihab, that’s an excellent question!

You calculate the expected value for each cell by first multiplying the column proportion by the row proportion that are associated with each cell. This calculation produces the expected proportion for that cell. Then, you take the expected proportion and multiply it by the total number of observations to obtain the expected count. Let’s work through an example!

I’ll calculate the expected value for wearing a Blue uniform and being Alive. That’s the top-left cell in the statistical output.

At the bottom of the Alive column, we see that 90.7% of all observations are alive. So, 0.907 is the proportion for the Alive column. The output doesn’t display the proportion for the Blue row, but we can calculate that easily. We can see that there are 136 total counts in the Blue row and there are 430 total crew members. Hence, the proportion for the Blue row is 136/430 = 0.31627.

Next, we multiply 0.907 * 0.31627 = 0.28685689. That’s the expected proportion that should fall in that Blue/Alive cell.

Now, we multiply that proportion by the total number of observations to obtain the expected count for that cell: 0.28685689 * 430 = 123.348

You can see in the statistical output that it has been rounded to 123.35.

You simply repeat that procedure for the rest of the cells.
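If you'd rather let software do the arithmetic, the general rule is: expected count = (row total × column total) / grand total. A quick check in R using the numbers above (the ~390 Alive column total is inferred from the quoted 90.7%, so it's approximate):

(136 * 390) / 430   # = 123.35, matching the expected count in the output
# For a whole table at once, chisq.test(observed)$expected returns every expected count.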

' src=

January 18, 2018 at 2:29 pm

very nice, thanks

' src=

January 1, 2018 at 8:51 am

Amazing post!! In the Tabulated Statistics section, you ran a Pearson chi-square and a likelihood-ratio chi-square test. Are both of these necessary, and do BOTH have to fall below the significance level for the null to be rejected? I'm assuming so. I don't know what the difference is between these two tests, but I will look it up. That was the only part that lost me :)

January 2, 2018 at 11:16 am

Thanks again, Jessica! I really appreciate your kind words!

When the two p-values are in agreement (e.g., both significant or insignificant), that’s easy. Fortunately, in my experience, these two p-values usually do agree. And, as the sample size increases, the agreement between them also increases.

I've looked into what to do when they disagree and have not found any clear answers. This paper suggests that as long as all expected frequencies are at least 5, you should use the Pearson chi-square test. When any expected frequency is less than 5, the article recommends an adjusted chi-square test, which is neither of the displayed tests!

These tests are most likely to disagree when you have borderline results to begin with (near your significance level), and particularly when you have a small sample. Either of these conditions alone make the results questionable. If these tests disagree, I’d take it as a big warning sign that more research is required!

' src=


December 7, 2017 at 8:18 am

A good presentation. My experience with researchers in health sciences and clinical studies is that very often people do not bother about the hypotheses (null and alternative) but run after a p-value, more so with the chi-square test of independence!! Your narration is excellent.

' src=

December 7, 2017 at 4:08 am

Helpful post. I can understand it now.

' src=

December 6, 2017 at 9:47 pm

Excellent Example, Thank you.

December 6, 2017 at 11:24 pm

You’re very welcome. I’m glad it was helpful!



Chi-Square Goodness of Fit Test | Formula, Guide & Examples

Published on May 24, 2022 by Shaun Turney. Revised on June 22, 2023.

A chi-square (Χ 2 ) goodness of fit test is a type of Pearson’s chi-square test . You can use it to test whether the observed distribution of a categorical variable differs from your expectations.

Example: Suppose a dog food company wants to know whether three new flavors are equally popular among dogs. You recruit a random sample of 75 dogs and offer each dog a choice between the three flavors by placing bowls in front of them. You expect that the flavors will be equally popular among the dogs, with about 25 dogs choosing each flavor.

The chi-square goodness of fit test tells you how well a statistical model fits a set of observations. It's often used to analyze genetic crosses.

Table of contents

  • What is the chi-square goodness of fit test?
  • Chi-square goodness of fit test hypotheses
  • When to use the chi-square goodness of fit test
  • How to calculate the test statistic (formula)
  • How to perform the chi-square goodness of fit test
  • When to use a different test
  • Practice questions and examples
  • Frequently asked questions about the chi-square goodness of fit test

A chi-square (Χ 2 ) goodness of fit test is a goodness of fit test for a categorical variable . Goodness of fit is a measure of how well a statistical model fits a set of observations.

  • When goodness of fit is high , the values expected based on the model are close to the observed values.
  • When goodness of fit is low , the values expected based on the model are far from the observed values.

The statistical models that are analyzed by chi-square goodness of fit tests are distributions . They can be any distribution, from as simple as equal probability for all groups, to as complex as a probability distribution with many parameters.


The chi-square goodness of fit test is a hypothesis test . It allows you to draw conclusions about the distribution of a population based on a sample. Using the chi-square goodness of fit test, you can test whether the goodness of fit is “good enough” to conclude that the population follows the distribution.

With the chi-square goodness of fit test, you can ask questions such as: Was this sample drawn from a population that has…

  • Equal proportions of male and female turtles?
  • Equal proportions of red, blue, yellow, green, and purple jelly beans?
  • 90% right-handed and 10% left-handed people?
  • Offspring with an equal probability of inheriting all possible genotypic combinations (i.e., unlinked genes)?
  • A Poisson distribution of floods per year?
  • A normal distribution of bread prices?
Observed and expected frequencies of dogs' flavor choices:

Flavor             Observed  Expected
Garlic Blast       22        25
Blueberry Delight  30        25
Minty Munch        23        25

To help visualize the differences between your observed and expected frequencies, you also create a bar graph:

[Bar graph comparing the observed and expected frequencies of dogs' flavor choices]

The president of the dog food company looks at your graph and declares that they should eliminate the Garlic Blast and Minty Munch flavors to focus on Blueberry Delight. “Not so fast!” you tell him.

You explain that your observations were a bit different from what you expected, but the differences aren’t dramatic. They could be the result of a real flavor preference or they could be due to chance.


Like all hypothesis tests, a chi-square goodness of fit test evaluates two hypotheses: the null and alternative hypotheses. They’re two competing answers to the question “Was the sample drawn from a population that follows the specified distribution?”

  • Null hypothesis ( H 0 ): The population follows the specified distribution.
  • Alternative hypothesis ( H a ):   The population does not follow the specified distribution.

These are general hypotheses that apply to all chi-square goodness of fit tests. You should make your hypotheses more specific by describing the “specified distribution.” You can name the probability distribution (e.g., Poisson distribution) or give the expected proportions of each group.

  • Null hypothesis ( H 0 ): The dog population chooses the three flavors in equal proportions ( p 1 = p 2 = p 3 ).
  • Alternative hypothesis ( H a ): The dog population does not choose the three flavors in equal proportions.

The following conditions are necessary if you want to perform a chi-square goodness of fit test:

  • You want to test a hypothesis about the distribution of one categorical variable. If your variable is continuous, you can convert it to a categorical variable by separating the observations into intervals. This process is known as data binning.
  • The sample was randomly selected from the population.
  • There are a minimum of five observations expected in each group.

The dog food example meets all three conditions:

  • You want to test a hypothesis about the distribution of one categorical variable. The categorical variable is the dog food flavors.
  • You recruited a random sample of 75 dogs.
  • There were a minimum of five observations expected in each group. For all three dog food flavors, you expected 25 observations of dogs choosing the flavor.

The test statistic for the chi-square (Χ²) goodness of fit test is Pearson's chi-square:

\begin{equation*}X^2 = \sum{\dfrac{(O-E)^2}{E}}\end{equation*}

where:

  • Χ² is the chi-square test statistic
  • Σ is the summation operator (it means "take the sum of")
  • O is the observed frequency
  • E is the expected frequency

The larger the difference between the observations and the expectations ( O − E in the equation), the bigger the chi-square will be.

To use the formula, follow these five steps:

Step 1: Create a table

Create a table with the observed and expected frequencies in two columns.

Flavor             Observed (O)  Expected (E)
Garlic Blast       22            25
Blueberry Delight  30            25
Minty Munch        23            25

Step 2: Calculate O − E

Add a new column called "O − E". Subtract the expected frequencies from the observed frequencies.

Flavor             Observed (O)  Expected (E)  O − E
Garlic Blast       22            25            22 − 25 = −3
Blueberry Delight  30            25            5
Minty Munch        23            25            −2

Step 3: Calculate ( O − E ) 2

Add a new column called "(O − E)²". Square the values in the previous column.

Flavor             O   E   O − E  (O − E)²
Garlic Blast       22  25  −3     (−3)² = 9
Blueberry Delight  30  25  5      25
Minty Munch        23  25  −2     4

Step 4: Calculate ( O − E ) 2 / E

Add a final column called "(O − E)² / E". Divide the previous column by the expected frequencies.

Flavor             O   E   O − E  (O − E)²  (O − E)² / E
Garlic Blast       22  25  −3     9         9/25 = 0.36
Blueberry Delight  30  25  5      25        1
Minty Munch        23  25  −2     4         0.16

Step 5: Calculate Χ 2

Add up the values of the previous column. This is the chi-square test statistic (Χ 2 ).

Flavor             O   E   O − E  (O − E)²  (O − E)² / E
Garlic Blast       22  25  −3     9         0.36
Blueberry Delight  30  25  5      25        1
Minty Munch        23  25  −2     4         0.16

Χ² = 0.36 + 1 + 0.16 = 1.52

The chi-square statistic is a measure of goodness of fit, but on its own it doesn’t tell you much. For example, is Χ 2 = 1.52 a low or high goodness of fit?

To interpret the chi-square goodness of fit, you need to compare it to something. That’s what a chi-square test is: comparing the chi-square value to the appropriate chi-square distribution to decide whether to reject the null hypothesis .

To perform a chi-square goodness of fit test, follow these five steps (the first two steps have already been completed for the dog food example):

Step 1: Calculate the expected frequencies

Sometimes, calculating the expected frequencies is the most difficult step. Think carefully about which expected values are most appropriate for your null hypothesis .

In general, you’ll need to multiply each group’s expected proportion by the total number of observations to get the expected frequencies.

Step 2: Calculate chi-square

Calculate the chi-square value from your observed and expected frequencies using the chi-square formula.

\begin{equation*}X^2 = \sum{\dfrac{(O-E)^2}{E}}\end{equation*}

Step 3: Find the critical chi-square value

Find the critical chi-square value in a chi-square critical value table or using statistical software. The critical value is calculated from a chi-square distribution. To find the critical chi-square value, you’ll need to know two things:

  • The degrees of freedom ( df ): For chi-square goodness of fit tests, the df is the number of groups minus one.
  • Significance level (α): By convention, the significance level is usually .05.
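If you have statistical software handy, you can pull both numbers straight from the chi-square distribution. For the dog food example (df = 2), a quick sketch in R:

qchisq(0.95, df = 2)                       # critical value at alpha = .05: about 5.99
pchisq(1.52, df = 2, lower.tail = FALSE)   # p-value for chi-square = 1.52: about 0.47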

Step 4: Compare the chi-square value to the critical value

Compare the chi-square value to the critical value to determine which is larger.

Χ² = 1.52

Critical value = 5.99

The Χ² value is less than the critical value.

Step 5: Decide whether to reject the null hypothesis

  • If the Χ² value is greater than the critical value, the data allows you to reject the null hypothesis and provides support for the alternative hypothesis.
  • If the Χ² value is less than the critical value, the data doesn't allow you to reject the null hypothesis and doesn't provide support for the alternative hypothesis.

For the dog food example, Χ² = 1.52 is less than the critical value of 5.99, so you fail to reject the null hypothesis that the flavors are equally popular.

Whether you use the chi-square goodness of fit test or a related test depends on what hypothesis you want to test and what type of variable you have.

When to use the chi-square test of independence

There's another type of chi-square test, called the chi-square test of independence.

  • Use the chi-square goodness of fit test when you have one categorical variable and you want to test a hypothesis about its distribution .
  • Use the chi-square test of independence when you have two categorical variables and you want to test a hypothesis about their relationship .

When to use a different goodness of fit test

The Anderson–Darling and Kolmogorov–Smirnov goodness of fit tests are two other common goodness of fit tests for distributions.

  • Use the Anderson–Darling or the Kolmogorov–Smirnov goodness of fit test when you have a continuous variable (that you don’t want to bin).
  • Use the chi-square goodness of fit test when you have a categorical variable (or a continuous variable that you want to bin).

Do you want to test your knowledge about the chi-square goodness of fit test? Download our practice questions and examples.


You can use the CHISQ.TEST() function to perform a chi-square goodness of fit test in Excel. It takes two arguments, CHISQ.TEST(observed_range, expected_range), and returns the p value.

You can use the chisq.test() function to perform a chi-square goodness of fit test in R. Give the observed values in the “x” argument, give the expected values in the “p” argument, and set “rescale.p” to true. For example:

chisq.test(x = c(22,30,23), p = c(25,25,25), rescale.p = TRUE)

Chi-square goodness of fit tests are often used in genetics. One common application is to check if two genes are linked (i.e., if the assortment is independent). When genes are linked, the allele inherited for one gene affects the allele inherited for another gene.

Suppose that you want to know if the genes for pea texture (R = round, r = wrinkled) and color (Y = yellow, y = green) are linked. You perform a dihybrid cross between two heterozygous ( RY / ry ) pea plants. The hypotheses you’re testing with your experiment are:

  • Null hypothesis (H0): The offspring have an equal probability of inheriting all possible genotypic combinations. This would suggest that the genes are unlinked.
  • Alternative hypothesis (Ha): The offspring do not have an equal probability of inheriting all possible genotypic combinations. This would suggest that the genes are linked.

You observe 100 peas:

  • 78 round and yellow peas
  • 6 round and green peas
  • 4 wrinkled and yellow peas
  • 12 wrinkled and green peas

To calculate the expected values, you can make a Punnett square. If the two genes are unlinked, the probability of each genotypic combination is equal.

        RY      Ry      rY      ry
RY      RRYY    RRYy    RrYY    RrYy
Ry      RRYy    RRyy    RrYy    Rryy
rY      RrYY    RrYy    rrYY    rrYy
ry      RrYy    Rryy    rrYy    rryy

The expected phenotypic ratios are therefore 9 round and yellow: 3 round and green: 3 wrinkled and yellow: 1 wrinkled and green.

From this, you can calculate the expected phenotypic frequencies for 100 peas:

Phenotype            Observed  Expected
Round and yellow     78        100 × (9/16) = 56.25
Round and green      6         100 × (3/16) = 18.75
Wrinkled and yellow  4         100 × (3/16) = 18.75
Wrinkled and green   12        100 × (1/16) = 6.25

Phenotype            O   E      O − E    (O − E)²  (O − E)² / E
Round and yellow     78  56.25  21.75    473.06    8.41
Round and green      6   18.75  −12.75   162.56    8.67
Wrinkled and yellow  4   18.75  −14.75   217.56    11.60
Wrinkled and green   12  6.25   5.75     33.06     5.29

Χ² = 8.41 + 8.67 + 11.60 + 5.29 = 33.97

Since there are four groups (round and yellow, round and green, wrinkled and yellow, wrinkled and green), there are three degrees of freedom .

For a test of significance at α = .05 and df = 3, the Χ 2 critical value is 7.82.

Χ² = 33.97

Critical value = 7.82

The Χ² value is greater than the critical value.

The Χ² value is greater than the critical value, so we reject the null hypothesis that the population of offspring has an equal probability of inheriting all possible genotypic combinations. There is a significant difference between the observed and expected phenotypic frequencies (p < .05).

The data support the alternative hypothesis that the offspring do not have an equal probability of inheriting all possible genotypic combinations, which suggests that the genes are linked.
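As a quick check, the whole test is a one-liner in R, using the observed counts and 9:3:3:1 proportions from the example (nothing here is new data):

chisq.test(x = c(78, 6, 4, 12), p = c(9, 3, 3, 1) / 16)
# Reports X-squared of about 34 on 3 df with a tiny p-value,
# agreeing with the hand calculation above.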

The two main chi-square tests are the chi-square goodness of fit test and the chi-square test of independence .

A chi-square distribution is a continuous probability distribution . The shape of a chi-square distribution depends on its degrees of freedom , k . The mean of a chi-square distribution is equal to its degrees of freedom ( k ) and the variance is 2 k . The range is 0 to ∞.



Understanding the Null Hypothesis in Chi-Square

The null hypothesis in chi-square testing suggests no significant difference between a study's observed and expected frequencies. It assumes any observed difference is due to chance and not to a meaningful statistical relationship.

Introduction

The chi-square test is a valuable tool in statistical analysis. It's a non-parametric test applied when the data are qualitative or categorical. This test helps to establish whether there is a significant association between two categorical variables in a sample population.

Central to any chi-square test is the concept of the null hypothesis. In the context of chi-square, the null hypothesis assumes no significant difference exists between the categories’ observed and expected frequencies. Any difference seen is likely due to chance or random error rather than a meaningful statistical difference.

  • The chi-square null hypothesis assumes no significant difference between observed and expected frequencies.
  • Failing to reject the null hypothesis doesn’t prove it true, only that data lacks strong evidence against it.
  • A p-value < the significance level indicates a significant association between variables.


Understanding the Concept of Null Hypothesis in Chi Square

The null hypothesis in chi-square tests is essentially a statement of no effect or no relationship. When it comes to categorical data, it indicates that the distribution of categories for one variable is not affected by the distribution of categories of the other variable.

For example, if we compare the preference for different types of fruit among men and women, the null hypothesis would state that the preference is independent of gender. The alternative hypothesis, on the other hand, would suggest a dependency between the two.

Steps to Formulate the Null Hypothesis in Chi-Square Tests

Formulating the null hypothesis is a critical step in any chi-square test. First, identify the variables being tested. Then, once the variables are determined, the null hypothesis can be formulated to state no association between them.

Next, collect your data. This data must be frequencies or counts of categories, not percentages or averages. Once the data is collected, you can calculate the expected frequency for each category under the null hypothesis.

Finally, use the chi-square formula to calculate the chi-square statistic. This will help determine whether to reject or fail to reject the null hypothesis.

  • Step 1. Identify variables: Determine the variables being tested in your study.
  • Step 2. State the null hypothesis: Formulate the null hypothesis to state that there is no association between the variables.
  • Step 3. Collect data: Gather your data. Remember, this must be frequencies or counts of categories, not percentages or averages.
  • Step 4. Calculate expected frequencies: Under the null hypothesis, calculate the expected frequency for each category.
  • Step 5. Compute the chi-square statistic: Use the chi-square formula to calculate the chi-square statistic. This will help determine whether to reject or fail to reject the null hypothesis.

Practical Example and Case Study

Consider a study evaluating whether smoking status is independent of a lung cancer diagnosis. The null hypothesis would state that smoking status (smoker or non-smoker) is independent of cancer diagnosis (yes or no).

If we find a p-value less than our significance level (typically 0.05) after conducting the chi-square test, we would reject the null hypothesis and conclude that smoking status is not independent of lung cancer diagnosis, suggesting a significant association between the two.

Observed Table

Smoking Status  Cancer Diagnosis  No Cancer Diagnosis
Smoker          70                30
Non-Smoker      20                80

Expected Table

Smoking Status  Cancer Diagnosis  No Cancer Diagnosis
Smoker          45                55
Non-Smoker      45                55

(Each expected count is (row total × column total) / grand total; for example, the Smoker/Cancer cell is 100 × 90 / 200 = 45.)
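A minimal sketch of this example in R, using the observed counts above (correct = FALSE turns off Yates' continuity correction so the statistic matches the plain Pearson formula):

smoking <- matrix(c(70, 30,
                    20, 80),
                  nrow = 2, byrow = TRUE,
                  dimnames = list(Smoking = c("Smoker", "Non-Smoker"),
                                  Diagnosis = c("Cancer", "No Cancer")))
test <- chisq.test(smoking, correct = FALSE)
test$expected   # reproduces the expected table above
test$p.value    # far below 0.05, so we reject the null hypothesis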

Common Misunderstandings and Pitfalls

One common misunderstanding is the interpretation of failing to reject the null hypothesis. It’s important to remember that failing to reject the null does not prove it true. Instead, it merely suggests that our data do not provide strong enough evidence against it.

Another pitfall is applying the chi-square test to inappropriate data. The chi-square test requires categorical or nominal data. Applying it to ordinal or continuous data without proper binning or categorization can lead to incorrect results.

The null hypothesis in chi-square testing is a powerful tool in statistical analysis. It provides a means to differentiate between observed variations due to random chance versus those that may signify a significant effect or relationship. As we continue to generate more data in various fields, the importance of understanding and correctly applying chi-square tests and the concept of the null hypothesis grows.

Frequently Asked Questions (FAQs)

Q: What is a chi-square test?
It’s a statistical test used to determine if there’s a significant association between two categorical variables.

Q: What are the null and alternative hypotheses in a chi-square test?
The null hypothesis suggests no significant difference between observed and expected frequencies exists. The alternative hypothesis suggests a significant difference.

Q: Can we "accept" the null hypothesis?
No, we never “accept” the null hypothesis. We only fail to reject it if the data doesn’t provide strong evidence against it.

Q: What does rejecting the null hypothesis mean?
Rejecting the null hypothesis implies a significant difference between observed and expected frequencies, suggesting an association between variables.

Q: What kind of data is appropriate for a chi-square test?
Chi-square tests are appropriate for categorical or nominal data.

Q: What is the significance level?
The significance level, often 0.05, is the probability threshold below which the null hypothesis can be rejected.

Q: How is the p-value interpreted?
A p-value below the significance level indicates a significant association between variables, leading to rejecting the null hypothesis.

Q: What is a common mistake when using the chi-square test?
Using the chi-square test on improper data, such as continuous data, without proper categorization can lead to incorrect results.

Q: What are the steps for formulating the null hypothesis?
Identify the variables, state their independence, collect data, calculate expected frequencies, and apply the chi-square formula.

Q: Why does the null hypothesis matter?
Understanding the null hypothesis is essential for correctly interpreting and applying chi-square tests, helping to make informed decisions based on data.



Chi-Square Test of Independence

The Chi-Square test of independence is used to determine if there is a significant relationship between two nominal (categorical) variables.  The frequency of each category for one nominal variable is compared across the categories of the second nominal variable.  The data can be displayed in a contingency table where each row represents a category for one variable and each column represents a category for the other variable.  For example, say a researcher wants to examine the relationship between gender (male vs. female) and empathy (high vs. low).  The chi-square test of independence can be used to examine this relationship.  The null hypothesis for this test is that there is no relationship between gender and empathy.  The alternative hypothesis is that there is a relationship between gender and empathy (e.g. there are more high-empathy females than high-empathy males).

Calculate Chi Square Statistic by Hand

First, we have to calculate the expected value for each cell of the two nominal variables, using this formula:

\[E_{ij} = \frac{(\text{row total}_i) \times (\text{column total}_j)}{N}\]

where N = total number of observations.

After calculating the expected value, we will apply the following formula to calculate the value of the Chi-Square test of Independence:

\[\chi^2 = \sum \frac{(O - E)^2}{E}\]

Degrees of freedom are calculated using the following formula: DF = (r - 1)(c - 1), where DF = degrees of freedom, r = number of rows, and c = number of columns.


Null hypothesis: Assumes that there is no association between the two variables.

Alternative hypothesis: Assumes that there is an association between the two variables.

Hypothesis testing: Hypothesis testing for the chi-square test of independence proceeds as it does for other tests like ANOVA: a test statistic is computed and compared to a critical value. The critical value for the chi-square statistic is determined by the level of significance (typically .05) and the degrees of freedom. The degrees of freedom for the chi-square are calculated using the following formula: df = (r-1)(c-1), where r is the number of rows and c is the number of columns. If the observed chi-square test statistic is greater than the critical value, the null hypothesis can be rejected.
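As a quick illustration of this critical-value lookup, here is a minimal Python sketch (scipy assumed; the 2 × 2 table dimensions are just an example):

```python
from scipy.stats import chi2

r, c = 2, 2                                # rows and columns in the contingency table
df = (r - 1) * (c - 1)                     # degrees of freedom = 1
alpha = 0.05
critical_value = chi2.ppf(1 - alpha, df)   # inverse CDF at 1 - alpha

print(df, round(critical_value, 3))        # 1 3.841
```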



Chi-Square Test of Independence by Hand

Introduction

This section covers how the test works: the observed frequencies, the expected frequencies, the test statistic, the critical value, and the conclusion and interpretation.

Chi-square tests of independence test whether two qualitative variables are independent, that is, whether there exists a relationship between two categorical variables. In other words, this test is used to determine whether the values of one of the 2 qualitative variables depend on the values of the other qualitative variable.

If the test shows no association between the two variables (i.e., the variables are independent), it means that knowing the value of one variable gives no information about the value of the other variable. On the contrary, if the test shows a relationship between the variables (i.e., the variables are dependent), it means that knowing the value of one variable provides information about the value of the other variable.

This article focuses on how to perform a Chi-square test of independence by hand and how to interpret the results with a concrete example. To learn how to do this test in R, read the article “Chi-square test of independence in R”.

The Chi-square test of independence is a hypothesis test so it has a null ( \(H_0\) ) and an alternative hypothesis ( \(H_1\) ):

  • \(H_0\) : the variables are independent, there is no relationship between the two categorical variables. Knowing the value of one variable does not help to predict the value of the other variable
  • \(H_1\) : the variables are dependent, there is a relationship between the two categorical variables. Knowing the value of one variable helps to predict the value of the other variable

The Chi-square test of independence works by comparing the observed frequencies (so the frequencies observed in your sample) to the expected frequencies if there was no relationship between the two categorical variables (so the expected frequencies if the null hypothesis was true).

If the difference between the observed frequencies and the expected frequencies is small, we cannot reject the null hypothesis of independence; that is, we cannot rule out that the two variables are unrelated. On the other hand, if the difference between the observed frequencies and the expected frequencies is large, we can reject the null hypothesis of independence and thus we can conclude that the two variables are related.

The threshold between a small and large difference is a value that comes from the Chi-square distribution (hence the name of the test). This value, referred to as the critical value, depends on the significance level \(\alpha\) (usually set equal to 5%) and on the degrees of freedom. This critical value can be found in the statistical table of the Chi-square distribution. More on this critical value and the degrees of freedom later in the article.

For our example, we want to determine whether there is a statistically significant association between smoking and being a professional athlete. Smoking can only be “yes” or “no” and being a professional athlete can only be “yes” or “no”. The two variables of interest are qualitative variables so we need to use a Chi-square test of independence, and the data have been collected on 28 persons.

Note that we chose binary variables (binary variables = qualitative variables with two levels) for the sake of easiness, but the Chi-square test of independence can also be performed on qualitative variables with more than two levels. For instance, if the variable smoking had three levels: (i) non-smokers, (ii) moderate smokers and (iii) heavy smokers, the steps and the interpretation of the results of the test are similar to those with two levels.

Our data are summarized in the contingency table below reporting the number of people in each subgroup, totals by row, by column and the grand total:

              Non-smoker   Smoker   Total
Athlete       14           4        18
Non-athlete   0            10       10
Total         14           14       28

Remember that for the Chi-square test of independence we need to determine whether the observed counts are significantly different from the counts that we would expect if there was no association between the two variables. We have the observed counts (see the table above), so we now need to compute the expected counts in the case the variables were independent. These expected frequencies are computed for each subgroup one by one with the following formula:

\[\text{exp. frequencies} = \frac{\text{total # of obs. for the row} \cdot \text{total # of obs. for the column}}{\text{total number of observations}}\]

where obs. correspond to observations. Given our table of observed frequencies above, below is the table of the expected frequencies computed for each subgroup:

              Non-smoker           Smoker               Total
Athlete       (18 * 14) / 28 = 9   (18 * 14) / 28 = 9   18
Non-athlete   (10 * 14) / 28 = 5   (10 * 14) / 28 = 5   10
Total         14                   14                   28

Note that the Chi-square test of independence should only be done when the expected frequencies in all groups are equal to or greater than 5. This assumption is met for our example as the minimum number of expected frequencies is 5. If the condition is not met, Fisher’s exact test is preferred.

Talking about assumptions, the Chi-square test of independence requires that the observations are independent. This is usually not tested formally, but rather verified based on the design of the experiment and on the good control of experimental conditions. If you are not sure, ask yourself if one observation is related to another (if one observation has an impact on another). If not, it is most likely that you have independent observations.

If you have dependent observations (paired samples), McNemar’s or Cochran’s Q test should be used instead. McNemar’s test is used when we want to know if there is a significant change in two paired samples (typically in a study with a measure before and after on the same subject) when the variables have only two categories. Cochran’s Q test is an extension of McNemar’s test when we have more than two related measures.

We have the observed and expected frequencies. We now need to compare these frequencies to determine if they differ significantly. The difference between the observed and expected frequencies, referred to as the test statistic and denoted \(\chi^2\), is computed as follows:

\[\chi^2 = \sum_{i, j} \frac{\big(O_{ij} - E_{ij}\big)^2}{E_{ij}}\]

where \(O\) represents the observed frequencies and \(E\) the expected frequencies. We use the square of the differences between the observed and expected frequencies to make sure that negative differences are not compensated by positive differences. The formula looks more complex than it really is, so let’s illustrate it with our example. We first compute the difference in each subgroup one by one according to the formula:

  • in the subgroup of athlete and non-smoker: \(\frac{(14 - 9)^2}{9} = 2.78\)
  • in the subgroup of non-athlete and non-smoker: \(\frac{(0 - 5)^2}{5} = 5\)
  • in the subgroup of athlete and smoker: \(\frac{(4 - 9)^2}{9} = 2.78\)
  • in the subgroup of non-athlete and smoker: \(\frac{(10 - 5)^2}{5} = 5\)

and then we sum them all to obtain the test statistic:

\[\chi^2 = 2.78 + 5 + 2.78 + 5 = 15.56\]
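The same computation can be cross-checked programmatically. Below is a short sketch in Python with numpy (an assumption of this illustration, not part of the original article) that builds the expected counts from the margins and sums the cell contributions:

```python
import numpy as np

# Observed counts: rows = athlete / non-athlete, columns = non-smoker / smoker
observed = np.array([[14, 4],
                     [0, 10]])

row_totals = observed.sum(axis=1)
col_totals = observed.sum(axis=0)
n = observed.sum()

# Expected count for each cell: row total * column total / grand total
expected = np.outer(row_totals, col_totals) / n

chi2_stat = ((observed - expected) ** 2 / expected).sum()
print(expected)              # [[9. 9.] [5. 5.]]
print(round(chi2_stat, 2))   # 15.56
```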

The test statistic alone is not enough to conclude for independence or dependence between the two variables. As previously mentioned, this test statistic (which in some sense is the difference between the observed and expected frequencies) must be compared to a critical value to determine whether the difference is large or small. One cannot tell that a test statistic is large or small without putting it in perspective with the critical value.

If the test statistic is above the critical value, it means that the probability of observing such a difference between the observed and expected frequencies is unlikely. On the other hand, if the test statistic is below the critical value, it means that the probability of observing such a difference is likely. If it is likely to observe this difference, we cannot reject the hypothesis that the two variables are independent, otherwise we can conclude that there exists a relationship between the variables.

The critical value can be found in the statistical table of the Chi-square distribution and depends on the significance level, denoted \(\alpha\), and the degrees of freedom, denoted \(df\). The significance level is usually set equal to 5%. The degrees of freedom for a Chi-square test of independence are found as follows:

\[df = (\text{number of rows} - 1) \cdot (\text{number of columns} - 1)\]

In our example, the degrees of freedom is thus \(df = (2 - 1) \cdot (2 - 1) = 1\) since there are two rows and two columns in the contingency table (totals do not count as a row or column).

We now have all the necessary information to find the critical value in the Chi-square table (\(\alpha = 0.05\) and \(df = 1\)). To find the critical value we need to look at the row \(df = 1\) and the column \(\chi^2_{0.050}\) (since \(\alpha = 0.05\)) in the table below. The critical value is \(3.84146\).

[Figure: Chi-square table, critical value for alpha = 5% and df = 1]

Now that we have the test statistic and the critical value, we can compare them to check whether the null hypothesis of independence of the variables is rejected or not. In our example,

\[\text{test statistic} = 15.56 > \text{critical value} = 3.84146\]

As with many statistical tests, when the test statistic is larger than the critical value, we can reject the null hypothesis at the specified significance level.

In our case, we can therefore reject the null hypothesis of independence between the two categorical variables at the 5% significance level.

\(\Rightarrow\) This means that there is a significant relationship between the smoking habit and being an athlete or not. Knowing the value of one variable helps to predict the value of the other variable.


For readers who prefer to check the \(p\)-value in order to reject or not the null hypothesis: here \(P(\chi^2_1 > 15.56) < 0.001\), which is below \(\alpha = 0.05\) and leads to the same conclusion.



Chi-Square Test of Independence

This lesson explains how to conduct a chi-square test for independence . The test is applied when you have two categorical variables from a single population. It is used to determine whether there is a significant association between the two variables.

For example, in an election survey, voters might be classified by gender (male or female) and voting preference (Democrat, Republican, or Independent). We could use a chi-square test for independence to determine whether gender is related to voting preference. The sample problem at the end of the lesson considers this example.

When to Use Chi-Square Test for Independence

The test procedure described in this lesson is appropriate when the following conditions are met:

  • The sampling method is simple random sampling .
  • The variables under study are each categorical .
  • If sample data are displayed in a contingency table , the expected frequency count for each cell of the table is at least 5.

This approach consists of four steps: (1) state the hypotheses, (2) formulate an analysis plan, (3) analyze sample data, and (4) interpret results.

State the Hypotheses

Suppose that Variable A has r levels, and Variable B has c levels. The null hypothesis states that knowing the level of Variable A does not help you predict the level of Variable B. That is, the variables are independent.

H0: Variable A and Variable B are independent.

Ha: Variable A and Variable B are not independent.

The alternative hypothesis is that knowing the level of Variable A can help you predict the level of Variable B.

Note: Support for the alternative hypothesis suggests that the variables are related; but the relationship is not necessarily causal, in the sense that one variable "causes" the other.

Formulate an Analysis Plan

The analysis plan describes how to use sample data to decide whether to reject the null hypothesis. The plan should specify the following elements.

  • Significance level. Often, researchers choose significance levels equal to 0.01, 0.05, or 0.10; but any value between 0 and 1 can be used.
  • Test method. Use the chi-square test for independence to determine whether there is a significant relationship between two categorical variables.

Analyze Sample Data

Using sample data, find the degrees of freedom, expected frequencies, test statistic, and the P-value associated with the test statistic. The approach described in this section is illustrated in the sample problem at the end of this lesson.

  • Degrees of freedom: \(DF = (r - 1) \times (c - 1)\)

  • Expected frequencies: \(E_{r,c} = \frac{n_r \times n_c}{n}\)

  • Test statistic: \(\chi^2 = \sum \frac{(O_{r,c} - E_{r,c})^2}{E_{r,c}}\)

  • P-value. The P-value is the probability of observing a sample statistic as extreme as the test statistic. Since the test statistic is a chi-square, use the Chi-Square Distribution Calculator to assess the probability associated with the test statistic. Use the degrees of freedom computed above.

Interpret Results

If the sample findings are unlikely, given the null hypothesis, the researcher rejects the null hypothesis. Typically, this involves comparing the P-value to the significance level , and rejecting the null hypothesis when the P-value is less than the significance level.

Test Your Understanding

A public opinion poll surveyed a simple random sample of 1000 voters. Respondents were classified by gender (male or female) and by voting preference (Republican, Democrat, or Independent). Results are shown in the contingency table below.

               Rep   Dem   Ind   Row total
Male           200   150   50    400
Female         250   300   50    600
Column total   450   450   100   1000

Is there a gender gap? Do the men's voting preferences differ significantly from the women's preferences? Use a 0.05 level of significance.

The solution to this problem takes four steps: (1) state the hypotheses, (2) formulate an analysis plan, (3) analyze sample data, and (4) interpret results. We work through those steps below:

H0: Gender and voting preferences are independent.

Ha: Gender and voting preferences are not independent.

  • Formulate an analysis plan . For this analysis, the significance level is 0.05. Using sample data, we will conduct a chi-square test for independence .

\[DF = (2 - 1) \times (3 - 1) = 2\]

\[E_{1,1} = \frac{400 \times 450}{1000} = 180, \quad E_{1,2} = \frac{400 \times 450}{1000} = 180, \quad E_{1,3} = \frac{400 \times 100}{1000} = 40\]
\[E_{2,1} = \frac{600 \times 450}{1000} = 270, \quad E_{2,2} = \frac{600 \times 450}{1000} = 270, \quad E_{2,3} = \frac{600 \times 100}{1000} = 60\]

\[\chi^2 = \frac{(200-180)^2}{180} + \frac{(150-180)^2}{180} + \frac{(50-40)^2}{40} + \frac{(250-270)^2}{270} + \frac{(300-270)^2}{270} + \frac{(50-60)^2}{60}\]
\[\chi^2 = 2.22 + 5.00 + 2.50 + 1.48 + 3.33 + 1.67 = 16.2\]

where DF is the degrees of freedom, r is the number of levels of gender, c is the number of levels of voting preference, \(n_r\) is the number of observations from level r of gender, \(n_c\) is the number of observations from level c of voting preference, n is the number of observations in the sample, \(E_{r,c}\) is the expected frequency count when gender is level r and voting preference is level c, and \(O_{r,c}\) is the observed frequency count when gender is level r and voting preference is level c.

The P-value is the probability that a chi-square statistic having 2 degrees of freedom is more extreme than 16.2. We use the Chi-Square Distribution Calculator to find P(Χ 2 > 16.2) = 0.0003.

  • Interpret results. Since the P-value (0.0003) is less than the significance level (0.05), we reject the null hypothesis. Thus, we conclude that there is a relationship between gender and voting preference.

Note: If you use this approach on an exam, you may also want to mention why this approach is appropriate. Specifically, the approach is appropriate because the sampling method was simple random sampling, the variables under study were categorical, and the expected frequency count was at least 5 in each cell of the contingency table.
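If you prefer to verify the sample problem with software, here is a minimal sketch in Python (scipy assumed) reproducing the test statistic and P-value:

```python
from scipy.stats import chi2, chi2_contingency

observed = [[200, 150, 50],    # male:   Rep, Dem, Ind
            [250, 300, 50]]    # female: Rep, Dem, Ind

stat, p, dof, expected = chi2_contingency(observed)
print(round(stat, 1), dof)     # 16.2 2

# Equivalently, the P-value from the survival function P(X^2 > 16.2):
print(round(chi2.sf(16.2, df=2), 4))   # 0.0003
```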

Chi-Square Test

A chi-square test is a type of statistical hypothesis test whose test statistic follows a chi-square distribution under the null hypothesis.

There are a number of different types of chi-square tests, the most commonly used of which is Pearson's chi-square test. Pearson's chi-square test is typically used for categorical data (data that may be divided into groups, e.g., age group, race, sex), and may be used for three types of comparison: independence, goodness of fit, and homogeneity. Most commonly, it is used to test for independence and goodness of fit, the two types of chi-square test discussed on this page. Both tests follow the same general procedure, but certain aspects differ, such as the calculation of the test statistic and degrees of freedom, the conditions under which each test is used, the form of their null and alternative hypotheses, and the conditions for rejection of the null hypothesis. The general procedure for a chi-square test is as follows:

  • State the null and alternative hypotheses.
  • Select the significance level, α.
  • Calculate the test statistic (the chi-square statistic, χ 2 , for the observed value).
  • Determine the critical region for the selected level of significance and the appropriate degrees of freedom.
  • Compare the test statistic to the critical value, and reject or fail to reject the null hypothesis based on the result.

Chi-square goodness of fit test

The chi-square goodness of fit test is used to test how well a sample of data fits some theoretical distribution. In other words, it can be used to help determine how well a model actually reflects the data, based on how close the observed values are to the values we would expect under the hypothesized distribution.

To conduct a chi-square goodness of fit test, it is necessary to first state the null and alternative hypotheses, which take the following form for this type of test:

H0: The data follow a given distribution.
Ha: The data do not follow a given distribution.

Like other hypothesis tests, the significance level of the test is selected by the researcher. The chi-square statistic is then calculated using a sample taken from the relevant population. The sample is grouped into categories such that each category contains a certain number of observed values, referred to as the frequency for the category. As a rule of thumb, the expected frequency for a category should be at least 5 for the chi-square approximation to be valid; it is not valid for small samples. The formula for the chi-square statistic, χ², is shown below:

\[\chi^2 = \sum_{i=1}^{n} \frac{(O_i - E_i)^2}{E_i}\]

where \(O_i\) is the observed frequency for category i, \(E_i\) is the expected frequency for category i, and n is the number of categories.

Once the test statistic has been calculated, the critical value for the selected level of significance can be determined using a chi-square table given that the degrees of freedom is n - 1. The value of the test statistic is then compared to the critical value, and if it is greater than the critical value, the null hypothesis is rejected in favor of the alternative hypothesis; if the value of the test statistic is less than the critical value, we fail to reject the null hypothesis.

Jennifer wants to know if a six-sided die she just purchased is fair (each side has an equal probability of occurring). She rolls the die 60 times and records the following outcomes:

Number rolled Frequency
1 13
2 7
3 14
4 6
5 15
6 5

Use a chi-square goodness of fit test with a significance level of α = 0.05 to test the fairness of the die.

The null and alternative hypotheses can be stated as follows:

H : the die is fair.
H : the die is not fair.

Since there is a 1/6 probability of any one of the numbers occurring on any given roll, and Jennifer rolled the die 60 times, she can expect to roll each face 10 times. Given the expected frequency, χ 2 can then be calculated as follows:

Face   Observed frequency (O)   Expected frequency (E)   O - E   (O - E)^2   (O - E)^2/E
1      13                       10                       3       9           0.9
2      7                        10                       -3      9           0.9
3      14                       10                       4       16          1.6
4      6                        10                       -4      16          1.6
5      15                       10                       5       25          2.5
6      5                        10                       -5      25          2.5
Sum    60                       60                                           10.0

Thus, χ² = 10. The degrees of freedom can be found as n - 1, or 6 - 1 = 5. Thus df = 5. Referencing an upper-tail chi-square table for a significance level of 0.05 and df = 5, the critical value is 11.07. Since the test statistic is less than the critical value, we fail to reject the null hypothesis. Thus, there is insufficient evidence to suggest that the die is unfair at a significance level of 0.05.
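As a cross-check, the same goodness-of-fit test can be run in a few lines of Python (scipy assumed); with no expected frequencies supplied, scipy.stats.chisquare assumes equal proportions, exactly as in this example:

```python
from scipy.stats import chisquare

observed = [13, 7, 14, 6, 15, 5]   # counts for faces 1 through 6

# Expected frequencies default to uniform: 60 rolls / 6 faces = 10 each
stat, p = chisquare(observed)
print(stat, round(p, 3))           # 10.0 0.075 -> fail to reject at alpha = 0.05
```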


Chi-square test of independence

The chi-square test of independence is used to help determine whether the differences between the observed and expected values of certain variables of interest indicate a statistically significant association between the variables, or if the differences can be simply attributed to chance; in other words, it is used to determine whether the value of one categorical variable depends on that of the other variable(s). In this type of hypothesis test, the null and alternative hypotheses take the following form:

H0: There is no statistically significant association between the two variables.
Ha: There is a statistically significant association between the two variables.

Though the chi-square statistic is defined similarly for both the test of independence and goodness of fit, the expected value for the test of independence is calculated differently, since it involves two variables rather than one. Let X and Y be the two variables being tested such that X has i categories and Y has j categories. The number of combinations of the categories for X and Y forms a contingency table that has i rows and j columns. Since we are assuming that the null hypothesis is true, and X and Y are independent variables, the expected value can be computed as

\[E_{ij} = \frac{n_i \times n_j}{n}\]

where \(n_i\) is the total of the observed frequencies in the i-th row, \(n_j\) is the total of the observed frequencies in the j-th column, and n is the sample size. χ² is then defined as

\[\chi^2 = \sum_{i=1}^{p} \sum_{j=1}^{q} \frac{(O_{ij} - E_{ij})^2}{E_{ij}}\]

where \(O_{ij}\) is the observed value in row i and column j, \(E_{ij}\) is the expected value in row i and column j, p is the number of rows, and q is the number of columns in the contingency table. Also, note that p represents the number of categories for one of the variables while q represents the number of categories for the other variable.

For a chi-square test of independence, the degrees of freedom can be determined as:

df = (p - 1)(q - 1)

Once df is known, the critical value and critical region can be determined for the selected significance level, and we can either reject or fail to reject the null hypothesis based on the results. Specifically:

  • For an upper-tailed one-sided test, use a table of upper-tail critical values. If the test statistic is greater than the value in the column of the table corresponding to (1 - α), reject the null hypothesis.
  • For a lower-tailed one-sided test, use a table of lower-tail critical values. If the test statistic is less than the value in the column of the table corresponding to α, reject the null hypothesis.
  • For a two-tailed test, use both tails of the distribution with α/2 in each tail:
  • Upper tail: if the test statistic is greater than the value in the column corresponding to (1 - α/2), reject the null hypothesis.
  • Lower tail: if the test statistic is less than the value in the column corresponding to α/2, reject the null hypothesis.

The figure below depicts the above criteria for rejection of the null hypothesis.

[Figure: critical regions for one-tailed (upper and lower) and two-tailed tests]

A survey of 500 people is conducted to determine whether there is a relationship between a person's sex and their favorite color. A choice of three colors (blue, red, green) was provided, and the results of the survey are shown in the contingency table below:

             Green   Blue   Red   Row sum
Male         100     85     68    253
Female       77      65     105   247
Column sum   177     150    173   500

Conduct a chi-square test of independence to test whether there is a relationship between sex and color preference at a significance level of α = 0.05.

H0: A person's favorite color is independent of their sex.
Ha: A person's favorite color is not independent of their sex.

\(E_{ij}\) is computed for each cell as follows:

          Green                Blue                 Red
Male      O = 100, E = 89.56   O = 85, E = 75.90    O = 68, E = 87.54
Female    O = 77, E = 87.44    O = 65, E = 74.10    O = 105, E = 85.46

The chi-square statistic is then computed as:

\[\chi^2 = \frac{(100 - 89.56)^2}{89.56} + \frac{(85 - 75.9)^2}{75.9} + \frac{(68 - 87.54)^2}{87.54} + \frac{(77 - 87.44)^2}{87.44} + \frac{(65 - 74.1)^2}{74.1} + \frac{(105 - 85.46)^2}{85.46} \approx 13.5\]

The degrees of freedom is computed as:

df = (2 - 1)(3 - 1) = 2

Thus, using a chi-square table, the critical value for α = 0.05 and df = 2 is 5.99. Since the test statistic, χ 2 = 13.5, is greater than the critical value, it lies in the critical region, so we reject the null hypothesis in favor of the alternative hypothesis at a significance level of 0.05.
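A brief programmatic check of this example (Python with scipy, assumed here) confirms the statistic and the decision:

```python
from scipy.stats import chi2_contingency

observed = [[100, 85, 68],    # male:   green, blue, red
            [77, 65, 105]]    # female: green, blue, red

stat, p, dof, expected = chi2_contingency(observed)
print(round(stat, 1), dof)    # 13.5 2
print(p < 0.05)               # True -> reject the null hypothesis
```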


A Complete Guide to Chi-Square Test


A statistical technique called the chi-squared test (represented symbolically as χ²) is employed to examine discrepancies between the data distributions that are observed and those that are expected. Known also as Pearson's chi-squared test, it was developed in 1900 by Karl Pearson for the analysis of categorical data and distribution. Assuming the null hypothesis is correct, this test determines the probability that the observed frequencies in a sample match the predicted frequencies. The null hypothesis, which essentially suggests that any observed differences are the result of random chance, is a statement that there is no substantial difference between the observed and predicted frequencies. A chi-squared test statistic is usually constructed as the sum of the squared differences between the observed and predicted frequencies, each normalized by the corresponding expected frequency. This test offers a means to test theories about the links between categorical variables by determining whether the observed deviations are statistically significant or can be attributed to chance.

The world is constantly curious about the Chi-Square test's application in machine learning and how it makes a difference. Feature selection is a critical topic in machine learning, as you will often have many candidate features and must choose the best ones to build the model. By examining the relationship between each feature and the target, the chi-square test aids in solving feature selection problems. In this tutorial, you will learn about the chi-square test and its application.
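To illustrate the feature-selection use case, here is a hedged sketch using scikit-learn (the library, the example dataset, and the choice of k are illustrative assumptions, not part of the original tutorial); note that sklearn's chi2 scorer requires non-negative feature values:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2

# Load a small example dataset with non-negative features
X, y = load_iris(return_X_y=True)

# Score each feature against the class labels with the chi-square statistic
# and keep the 2 highest-scoring features
selector = SelectKBest(score_func=chi2, k=2)
X_selected = selector.fit_transform(X, y)

print(selector.scores_)   # chi-square score per feature
print(X_selected.shape)   # (150, 2)
```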

What Is a Chi-Square Test?

The Chi-Square test is a statistical procedure for determining whether observed data differ from expected data, and whether two categorical variables in our data are related. It helps to find out whether a difference between two categorical variables is due to chance or to a relationship between them.

Chi-Square Test Definition

A chi-square test is a statistical test that is used to compare observed and expected results. The goal of this test is to identify whether a disparity between actual and predicted data is due to chance or to a link between the variables under consideration. As a result, the chi-square test is an ideal choice for aiding in our understanding and interpretation of the connection between our two categorical variables.

A chi-square test or comparable nonparametric test is required to test a hypothesis regarding the distribution of a categorical variable. Categorical variables, which indicate categories such as animals or countries, can be nominal or ordinal. They cannot have a normal distribution since they can only have a few particular values.

For example, a meal delivery firm in India wants to investigate the link between gender, geography, and people's food preferences.

It is used to determine whether a difference between two categorical variables is:

  • a result of chance, or
  • due to a relationship between them


Formula For Chi-Square Test

\[\chi^2_c = \sum \frac{(O_i - E_i)^2}{E_i}\]

c = Degrees of freedom

O = Observed Value

E = Expected Value

The degrees of freedom in a statistical calculation represent the number of values that are free to vary. Calculating the degrees of freedom ensures that chi-square tests are statistically valid. These tests are frequently used to compare observed data with the data that would be expected if a particular hypothesis were true.

The Observed values are those you gather yourselves.

The expected values are the frequencies expected, based on the null hypothesis. 

Fundamentals of Hypothesis Testing

Hypothesis testing is a technique for interpreting and drawing inferences about a population based on sample data. It aids in determining which sample data best support mutually exclusive population claims.

Null Hypothesis (H0) - The null hypothesis is the assumption that the event will not occur. A null hypothesis has no bearing on the study's outcome unless it is rejected. H0 is the symbol for it, and it is pronounced H-naught.

Alternative Hypothesis (H1) - The alternative hypothesis is the logical opposite of the null hypothesis: the assumption that the event will occur, i.e., that an association or difference exists.


What Does A Chi-Square Statistic Test Tell You?

A Chi-Square test (symbolically represented as χ²) is fundamentally a data analysis based on the observations of a random set of variables. It computes how a model compares to actual observed data. A Chi-Square statistic is calculated from data that must be raw, random, drawn from independent variables, drawn from a wide-ranging sample, and mutually exclusive. In simple terms, two sets of statistical data are compared, for instance, the results of tossing a fair coin. Karl Pearson introduced this test in 1900 for categorical data analysis and distribution, which is why it is also known as ‘Pearson’s Chi-Squared Test’.

Chi-Squared Tests are most commonly used in hypothesis testing. A hypothesis is an assumption that any given condition might be true, which can be tested afterwards. The Chi-Square test estimates the size of inconsistency between the expected results and the actual results when the size of the sample and the number of variables in the relationship is mentioned. 

These tests use degrees of freedom to determine if a particular null hypothesis can be rejected based on the total number of observations made in the experiments. The larger the sample size, the more reliable the result.

Types of Chi-Square Tests

There are two main types of Chi-Square tests, namely:

  • Independence
  • Goodness-of-Fit

The Chi-Square Test of Independence is an inferential statistical test that examines whether two sets of variables are likely to be related to each other. This test is used when we have counts of values for two nominal or categorical variables and is considered a non-parametric test. A relatively large sample size and independence of observations are the required criteria for conducting this test.

For example:

In a movie theatre, suppose we made a list of movie genres; let us consider this as the first variable. The second variable is whether or not the people who came to watch those genres of movies bought snacks at the theatre. Here the null hypothesis is that the genre of the film and whether people bought snacks or not are unrelated. If this is true, the movie genres don’t impact snack sales.


Goodness-Of-Fit

In statistical hypothesis testing, the Chi-Square Goodness-of-Fit test determines whether a variable is likely to come from a given distribution or not. We must have a set of data values and an idea of the distribution of this data. We can use this test when we have value counts for categorical variables. This test demonstrates a way of deciding if the data values have a “good enough” fit for our idea, or if the data are a representative sample of the entire population.

Suppose we have bags of balls with five different colours in each bag. The given condition is that the bag should contain an equal number of balls of each colour. The idea we would like to test here is that the proportions of the five colours of balls in each bag are equal.

What Are Categorical Variables?

Categorical variables belong to a subset of variables that can be divided into discrete categories. Names or labels are the most common categories. These variables are also known as qualitative variables because they depict the variable's quality or characteristics.

Categorical variables can be divided into two categories:

  • Nominal Variable: A nominal variable's categories have no natural ordering. Example: Gender, Blood groups
  • Ordinal Variable: An ordinal variable is one whose categories can be ordered. Customer satisfaction (Excellent, Very Good, Good, Average, Bad, and so on) is an example.

Why Do You Use the Chi-Square Test?

Chi-square is a statistical test that examines the differences between categorical variables from a random sample in order to determine whether the expected and observed results are well-fitting.

Here are some of the uses of the Chi-Squared test:

  • The Chi-squared test can be used to see if your data follows a well-known theoretical probability distribution like the Normal or Poisson distribution.
  • The Chi-squared test allows you to assess your trained regression model's goodness of fit on the training, validation, and test data sets.

Who Uses Chi-Square Analysis?

Chi-square is most commonly used by researchers who are studying survey response data because it applies to categorical variables. Demography, consumer and marketing research, political science, and economics are all examples of this type of research.

Let's say you want to know if gender has anything to do with political party preference. You poll 440 voters in a simple random sample to find out which political party they prefer. The results of the survey are shown in the table below:

[Table: observed counts of political party preference by gender]

To see if gender is linked to political party preference, perform a Chi-Square test of independence using the steps below.

Step 1: Define the Hypothesis

H0: There is no link between gender and political party preference.

H1: There is a link between gender and political party preference.

Step 2: Calculate the Expected Values

Now you will calculate the expected frequency.

\[E = \frac{\text{row total} \times \text{column total}}{\text{grand total}}\]

For example, the expected value for Male Republicans is: 

[Calculation: expected value for Male Republicans]

Similarly, you can calculate the expected value for each of the cells.

[Table: expected values for each cell]

Step 3: Calculate (O - E)² / E for Each Cell in the Table

Now you will calculate the (O - E)² / E value for each cell in the table.

[Table: (O - E)² / E values for each cell]

Step 4: Calculate the Test Statistic χ²

χ² is the sum of all the values in the last table:

χ² = 0.743 + 2.05 + 2.33 + 3.33 + 0.384 + 1 = 9.83

Before you can conclude, you must first determine the critical statistic, which requires determining our degrees of freedom. The degrees of freedom in this case are equal to the table's number of columns minus one multiplied by the table's number of rows minus one, or (r-1) (c-1). We have (3-1)(2-1) = 2.

Finally, you compare the obtained statistic to the critical statistic found in the chi-square table. As you can see, for an alpha level of 0.05 and two degrees of freedom, the critical statistic is 5.991, which is less than the obtained statistic of 9.83. You can reject the null hypothesis because the obtained statistic is higher than the critical statistic.

This means you have sufficient evidence to say that there is an association between gender and political party preference.


Practice Problems

1. Voting Patterns

A researcher wants to know if voting preferences (party A, party B, or party C) and gender (male, female) are related. Apply a chi-square test to the following set of data:

  • Male: Party A - 30, Party B - 20, Party C - 50
  • Female: Party A - 40, Party B - 30, Party C - 30

To determine if gender influences voting preferences, run a chi-square test of independence.

2. State of Health

In a sample population, a medical study examines the association between smoking status (smoker, non-smoker) and the occurrence of lung disease (yes, no). The information is as follows:

  • Smoker: Yes - 90, No - 60
  • Non-smoker: Yes - 30, No - 120 

To find out if smoking status is related to the incidence of lung disease, do a chi-square test.

3. Consumer Preferences

Customers are surveyed by a company to determine whether their age group (under 20, 20-40, over 40) and their preferred product category (food, apparel, or electronics) are related. The information gathered is:

  • Under 20: Electronic - 50, Clothing - 30, Food - 20
  • 20-40: Electronic - 60, Clothing - 70, Food - 50
  • Over 40: Electronic - 30, Clothing - 40, Food - 80

Use a chi-square test to investigate the connection between product preference and age group.

4. Academic Performance

An educational researcher looks at the relationship between students' success on standardized tests (pass, fail) and whether or not they participate in after-school programs. The information is as follows:

  • Yes: Pass - 80, Fail - 20
  • No: Pass - 50, Fail - 50

Use a chi-square test to determine if involvement in after-school programs and test scores are connected.

5. Genetic Inheritance

A geneticist investigates how a particular trait is inherited in plants and seeks to ascertain whether the expression of a trait (trait present, trait absent) and the existence of a genetic marker (marker present, marker absent) are significantly correlated. The information gathered is:

  • Marker Present: Trait Present - 70, Trait Absent - 30
  • Marker Absent: Trait Present - 40, Trait Absent - 60

Do a chi-square test to determine if there is a correlation between the trait's expression and the genetic marker.

How to Solve Chi-Square Problems

1. State the Hypotheses

  • Null hypothesis (H0): There is no association between the variables
  • Alternative hypothesis (H1): There is an association between the variables.

2. Calculate the Expected Frequencies

  • Use the formula: \(E = \frac{\text{Row Total} \times \text{Column Total}}{\text{Grand Total}}\)

3. Compute the Chi-Square Statistic

  • Use the formula: \(\chi^2 = \sum \frac{(O - E)^2}{E}\), where O is the observed frequency and E is the expected frequency.

4. Determine the Degrees of Freedom (df)

  • Use the formula: \(df = (\text{number of rows} - 1) \times (\text{number of columns} - 1)\)

5. Find the Critical Value and Compare

  • Use the chi-square distribution table to find the critical value for the given df and significance level (usually 0.05).
  • Compare the chi-square statistic to the critical value to decide whether to reject the null hypothesis.

These practice problems help you understand how chi-square analysis tests hypotheses and explores relationships between categorical variables in various fields.
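Applying these steps to practice problem 2 above (smoking status vs. lung disease), here is a minimal sketch in Python (scipy assumed):

```python
from scipy.stats import chi2_contingency

# Practice problem 2: rows = smoker / non-smoker, columns = disease yes / no
observed = [[90, 60],
            [30, 120]]

stat, p, dof, expected = chi2_contingency(observed, correction=False)
print(expected)        # [[60. 90.] [60. 90.]]
print(stat, dof)       # 50.0 1
print(p < 0.05)        # True -> smoking status and lung disease are associated
```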

When to Use a Chi-Square Test?

A Chi-Square Test is used to examine whether the observed results are in line with the expected values. When the data to be analysed come from a random sample, and when the variable in question is categorical, the Chi-Square test is the most appropriate choice. A categorical variable consists of selections such as breeds of dogs, types of cars, genres of movies, educational attainment, male vs. female, etc. Survey responses and questionnaires are the primary sources of these types of data, and the Chi-square test is most commonly used for analysing this kind of data. This type of analysis is helpful for researchers studying survey response data, in fields ranging from customer and marketing research to political science and economics.


Chi-Square Distribution 

Chi-square distributions (X2) are a type of continuous probability distribution. They're commonly utilized in hypothesis testing, such as the chi-square goodness of fit and independence tests. The parameter k, which represents the degrees of freedom, determines the shape of a chi-square distribution.

A chi-square distribution is followed by very few real-world observations. The objective of chi-square distributions is to test hypotheses, not to describe real-world distributions. In contrast, most other commonly used distributions, such as normal and Poisson distributions, may explain important things like baby birth weights or illness cases per year.

Because of its close relationship to the standard normal distribution, chi-square distributions are excellent for hypothesis testing. Many essential statistical tests rely on the standard normal distribution.

In statistical analysis, the Chi-Square distribution is used in many hypothesis tests and is determined by the parameter k, the degrees of freedom. It belongs to the family of continuous probability distributions. The sum of the squares of k independent standard normal random variables follows a Chi-Squared distribution with k degrees of freedom. Pearson's Chi-Square test formula is:

\[\chi^2 = \sum \frac{(O - E)^2}{E}\]

where χ² is the Chi-Square test statistic, Σ is the summation over the categories, O is the observed result, and E is the expected result.

The shape of the distribution graph changes as the value of k (the degrees of freedom) increases.

When k is 1 or 2, the Chi-square distribution curve is shaped like a backwards ‘J’, meaning there is a high probability that χ² is close to zero.


When k is greater than 2, the distribution curve is hump-shaped, with a low probability that χ² is either very near to 0 or very far from 0. The distribution has a longer tail on the right-hand side than on the left. The most probable value (mode) of χ² is k - 2.

When k is greater than about ninety, the Chi-square distribution is well approximated by a normal distribution.

Chi-Square P-Values

Here P denotes probability; the Chi-Square test statistic is used to calculate the p-value. Different p-values lead to different conclusions about the hypotheses:

  • P ≤ 0.05: the null hypothesis is rejected
  • P > 0.05: we fail to reject the null hypothesis

The concepts of probability and statistics are entangled with the Chi-Square test. Probability is the estimation of how likely something is to happen; simply put, it is the possibility of an event or outcome in the sample, and it can represent bulky or complicated data in an understandable way. Statistics involves collecting, organising, analysing, interpreting, and presenting data.

Finding P-Value

When you run all of the Chi-square tests, you'll get a test statistic called X2. You have two options for determining whether this test statistic is statistically significant at some alpha level:

  • Compare the test statistic X2 to a critical value from the Chi-square distribution table.
  • Compare the p-value of the test statistic X2 to a chosen alpha level.

Test statistics are calculated by taking into account the sampling distribution of the test statistic under the null hypothesis, the sample data, and the approach chosen for performing the test.

The p-value is computed as follows, depending on the type of test:

  • Upper-tailed test: p-value = P(TS ≥ ts | H0 is true) = 1 - cdf(ts)
  • Lower-tailed test: p-value = P(TS ≤ ts | H0 is true) = cdf(ts)
  • Two-sided test (assuming the distribution of TS under H0 is symmetric about 0): p-value = 2 · P(TS ≥ |ts| | H0 is true) = 2 · (1 - cdf(|ts|))

where P is a probability, TS is the test statistic, ts is the observed value of the test statistic computed from your sample, and cdf() is the cumulative distribution function of the test statistic's distribution under H0.
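The cases above can be wrapped in a tiny helper (a sketch in Python with scipy, assumed here); note that chi-square tests are almost always upper-tailed, so the first branch is the one you will normally use:

```python
from scipy.stats import chi2

def p_value(ts, df, tail="upper"):
    """p-value for a chi-square test statistic ts with df degrees of freedom."""
    if tail == "upper":            # P(TS >= ts) = 1 - cdf(ts)
        return chi2.sf(ts, df)
    else:                          # lower tail: P(TS <= ts) = cdf(ts)
        return chi2.cdf(ts, df)

print(round(p_value(16.2, 2), 4))  # 0.0003 -- the voter example from earlier
```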

Types of Chi-square Tests

Pearson's chi-square tests are classified into two types:

  • Chi-square goodness-of-fit analysis
  • Chi-square independence test

These are, mathematically, the same test. However, because they are used for distinct goals, we generally think of them as separate tests.

Properties of Chi-Square Test 

  • The variance is equal to twice the number of degrees of freedom.
  • The mean is equal to the number of degrees of freedom.
  • As the degrees of freedom increase, the Chi-Square distribution curve approaches a normal distribution.

Limitations of Chi-Square Test

There are two limitations to using the chi-square test that you should be aware of. 

  • The chi-square test, for starters, is extremely sensitive to sample size. Even insignificant relationships can appear statistically significant when a large enough sample is used. Keep in mind that "statistically significant" does not always imply "meaningful" when using the chi-square test.
  • Be mindful that the chi-square can only determine whether two variables are related. It does not necessarily follow that one variable has a causal relationship with the other. It would require a more detailed analysis to establish causality.


Chi-Square Goodness of Fit Test

When there is only one categorical variable, the chi-square goodness of fit test can be used. The frequency distribution of the categorical variable is evaluated to determine whether it differs significantly from what you expected. The default idea is that the categories have equal proportions; however, this need not be the case, and other expected proportions can be specified.

When you want to see if there is a link between two categorical variables, you perform the chi-square test. To obtain the test statistic and its related p-value in SPSS, use the chisq option on the statistics subcommand of the crosstabs command. Remember that the chi-square test assumes that each cell's expected value is five or greater.
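For readers working in Python rather than SPSS, a roughly equivalent workflow (pandas and scipy assumed; the column names and tiny dataset are illustrative only) builds the contingency table with pandas.crosstab and passes it to scipy:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Illustrative raw data; in practice this would be your own dataset
df = pd.DataFrame({
    "gender":     ["M", "F", "M", "F", "M", "F", "M", "F"],
    "preference": ["A", "A", "B", "B", "A", "B", "B", "A"],
})

# Build the contingency table of counts, then run the test
table = pd.crosstab(df["gender"], df["preference"])
stat, p, dof, expected = chi2_contingency(table)
print(table)
print(stat, dof, p)
```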

In this tutorial titled ‘The Complete Guide to Chi-square test’, you explored the concept of the Chi-square distribution and how to find the related values. You also took a look at how the critical value and the chi-square value are related to each other.


1) What is the chi-square test used for? 

The chi-square test is a statistical method used to determine if there is a significant association between two categorical variables. It helps researchers understand whether the observed distribution of data differs from the expected distribution, allowing them to assess whether any relationship exists between the variables being studied.

2) What is the chi-square test and its types? 

The chi-square test is a statistical test used to analyze categorical data and assess the independence or association between variables. There are two main types of chi-square tests: a) Chi-square test of independence: This test determines whether there is a significant association between two categorical variables. b) Chi-square goodness-of-fit test: This test compares the observed data to the expected data to assess how well the observed data fit the expected distribution.

3) What is the chi-square test easily explained? 

The chi-square test is a statistical tool used to check if two categorical variables are related or independent. It helps us understand if the observed data differs significantly from the expected data. By comparing the two datasets, we can draw conclusions about whether the variables have a meaningful association.

4) What is the difference between t-test and chi-square? 

The t-test and the chi-square test are two different statistical tests used for different types of data. The t-test is used to compare the means of two groups and is suitable for continuous numerical data. On the other hand, the chi-square test is used to examine the association between two categorical variables. It is applicable to discrete, categorical data. So, the choice between the t-test and chi-square test depends on the nature of the data being analyzed.

5) What are the characteristics of chi-square? 

The chi-square test has several key characteristics:

1) It is non-parametric, meaning it does not assume a specific probability distribution for the data.

2) It is sensitive to sample size; with large samples, even small effects can reach statistical significance.

3) It works with categorical data and is used for hypothesis testing and analyzing associations.

4) The test output provides a p-value, which indicates the level of significance for the observed relationship between variables.

5) It can be used with different levels of significance (e.g., 0.05 or 0.01) to determine statistical significance.

S.4 Chi-Square Tests: Chi-Square Test of Independence

Do you remember how to test the independence of two categorical variables? This test is performed by using a Chi-square test of independence.

Recall that we can summarize two categorical variables within a two-way table, also called an r × c contingency table, where r = number of rows, c = number of columns. Our question of interest is “Are the two variables independent?” This question is set up using the following hypothesis statements:

H0: The two variables are independent.
HA: The two variables are not independent.

The chi-square test statistic is

\[\chi^2=\sum\frac{(O-E)^2}{E}\]

where O is the observed frequency in a cell and E is the expected frequency under the null hypothesis, computed as

\[E=\frac{\text{row total}\times\text{column total}}{\text{sample size}}\]

We will compare the value of the test statistic to the critical value of \(\chi_{\alpha}^2\) with degrees of freedom = (r − 1)(c − 1), and reject the null hypothesis if \(\chi^2 \gt \chi_{\alpha}^2\).

Example S.4.1

Is gender independent of education level? A random sample of 395 people was surveyed and each person was asked to report the highest education level they obtained. The data that resulted from the survey are summarized in the following table:

        High School  Bachelors  Masters  Ph.D.  Total
Female           60         54       46     41    201
Male             40         44       53     57    194
Total           100         98       99     98    395

Question: Are gender and education level dependent at a 5% level of significance? In other words, given the data collected above, is there a relationship between the gender of an individual and the level of education that they have obtained?

Here's the table of expected counts:

        High School  Bachelors  Masters   Ph.D.  Total
Female       50.886     49.868   50.377  49.868    201
Male         49.114     48.132   48.623  48.132    194
Total       100         98       99      98        395

So, working this out, \(\chi^2= \dfrac{(60−50.886)^2}{50.886} + \cdots + \dfrac{(57 − 48.132)^2}{48.132} = 8.006\)

The critical value of \(\chi^2\) with 3 degrees of freedom is 7.815. Since 8.006 > 7.815, we reject the null hypothesis and conclude that the education level depends on gender at a 5% level of significance.
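For readers who want to check the arithmetic, here is a minimal sketch (Python with SciPy, assumed available) that reproduces this example with scipy.stats.chi2_contingency:

```python
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[60, 54, 46, 41],    # Female
                     [40, 44, 53, 57]])   # Male

stat, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {stat:.3f}, dof = {dof}, p = {p:.4f}")
# chi2 = 8.006 with 3 degrees of freedom and p < 0.05,
# matching the hand calculation above.
```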

Hypothesis Testing Assignments: Breaking Down the Process into Actionable Steps

Understanding Hypothesis Testing

This guide walks through the process step by step: 1) understand the problem statement; 2) formulate the hypotheses; 3) choose the significance level (α); 4) select the appropriate test; 5) calculate the test statistic; 6) determine the p-value; and 7) interpret the results. It closes with practical tips: verify assumptions, use statistical software, practice with real data, and seek help when needed.

Hypothesis testing is a fundamental component of statistics, essential for making informed decisions based on data. This statistical method evaluates claims or theories about population parameters by analyzing sample data. Whether your assignment involves testing population means, comparing multiple groups, or examining proportions, a clear understanding of hypothesis testing is vital. The process allows you to determine whether there is enough evidence to support or reject a given hypothesis, thereby guiding your conclusions. By systematically applying hypothesis testing techniques, you can tackle a wide range of statistical problems with confidence. This blog aims to demystify hypothesis testing by outlining each step in a clear and actionable manner. From formulating hypotheses and choosing the right statistical tests to interpreting results and understanding p-values, you'll gain insight into how to approach and solve hypothesis testing assignments effectively. This structured approach will enhance your analytical skills and improve your performance in handling complex statistical tasks.

Hypothesis testing is a method used to make inferences or draw conclusions about a population based on sample data. It involves evaluating two opposing hypotheses:

  • Null Hypothesis (H0): This represents a statement of no effect or no difference. It assumes that any observed differences in the data are due to random chance.
  • Alternative Hypothesis (Ha): This represents a statement indicating the presence of an effect or a difference. It suggests that the observed differences are not due to chance but rather a real effect.

The essence of hypothesis testing lies in determining whether there is sufficient evidence in the sample data to reject the null hypothesis in favor of the alternative hypothesis.

Steps to Perform Hypothesis Testing

To effectively perform hypothesis testing, follow these essential steps:

The first step in hypothesis testing is to thoroughly understand the problem statement. Identify the key variables involved, the population parameter of interest, and the specific hypotheses that need to be tested. For instance, you might need to determine if a sample mean differs from a known population mean or if two groups differ significantly.

Once you have a clear understanding of the problem, formulate the null and alternative hypotheses:

  • Null Hypothesis (H0): This hypothesis assumes no change or no effect. It is the default position that any observed effect is due to chance.
  • Alternative Hypothesis (Ha): This hypothesis represents what you are testing for. It suggests that there is an effect or difference.

For example:

  • Null Hypothesis: The mean amount of water in bottles is 1 gallon (H0: μ = 1).
  • Alternative Hypothesis: The mean amount of water in bottles is not 1 gallon (Ha: μ ≠ 1).
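The water-bottle hypotheses above could be tested with a one-sample t-test; here is a minimal sketch (Python with SciPy, assumed available; the measurements are hypothetical):

```python
from scipy.stats import ttest_1samp

volumes = [0.98, 1.02, 0.99, 1.01, 0.97, 1.00, 0.96, 1.03]  # gallons
stat, p = ttest_1samp(volumes, popmean=1.0)   # two-sided by default
print(f"t = {stat:.3f}, p = {p:.3f}")
# If p <= alpha (say 0.05), reject H0: mu = 1 gallon.
```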

The significance level (α) is the threshold for determining whether the observed data is statistically significant. It represents the probability of rejecting the null hypothesis when it is actually true (Type I error). Common choices for α are 0.05, 0.01, and 0.10. A lower α value indicates a more stringent criterion for rejecting H0.

The type of test you use depends on the nature of your data and the hypothesis being tested:

  • z-test: Used when the sample size is large (typically n>30) or when the population variance is known.
  • t-test: Applied when the sample size is small (typically n≤30) and the population variance is unknown.
  • ANOVA (Analysis of Variance): Used when comparing means across multiple groups.
  • Chi-square Test: Utilized for categorical data to test proportions or frequencies.

Choose the test based on the characteristics of your data and the specific hypothesis.

The test statistic measures how far the sample estimate deviates from the value specified by the null hypothesis. Different tests have different formulas; for example, the one-sample z-statistic is

\[z=\frac{\bar{x}-\mu_{0}}{\sigma/\sqrt{n}}\]

Calculate the test statistic using the appropriate formula for the test you selected.

The p-value represents the probability of obtaining a test statistic as extreme as, or more extreme than, the one observed in the sample, assuming that the null hypothesis is true. It helps in deciding whether to reject the null hypothesis:

  • If p ≤ α: Reject the null hypothesis. There is sufficient evidence to support the alternative hypothesis.
  • If p > α: Fail to reject the null hypothesis. There is not enough evidence to support the alternative hypothesis.

Interpreting the results involves translating the statistical findings into practical terms. Based on the p-value, conclude whether there is sufficient evidence to reject the null hypothesis. Provide a clear explanation of what the result means in the context of the problem.
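Putting the last two steps together, the decision rule is a simple comparison; a minimal sketch (the p-value below is hypothetical):

```python
alpha = 0.05   # chosen significance level
p = 0.031      # hypothetical p-value from the selected test
if p <= alpha:
    print("Reject H0: the data provide sufficient evidence for Ha.")
else:
    print("Fail to reject H0: insufficient evidence for Ha.")
```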

Practical Tips for Hypothesis Testing

Navigating the complexities of hypothesis testing can be challenging, but certain strategies can streamline the process and improve accuracy. Here are some practical tips to enhance your hypothesis testing experience:

Before performing any test, ensure that the data meets the assumptions required for the test. For instance:

  • Normality: Many tests assume the data is normally distributed, especially for smaller sample sizes.
  • Independence: Observations should be independent of one another.
  • Equal Variances: When comparing means across groups, the variances should be roughly equal.

Checking these assumptions helps ensure the validity of your test results.
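Two of these checks are one-liners in most statistical packages; here is a minimal sketch (Python with SciPy, assumed available; the samples are hypothetical) using the Shapiro-Wilk test for normality and Levene's test for equal variances:

```python
from scipy.stats import shapiro, levene

group_a = [5.1, 4.9, 5.4, 5.0, 5.2, 4.8, 5.3]
group_b = [4.7, 5.5, 5.0, 5.1, 4.6, 5.4, 5.2]

# Shapiro-Wilk: H0 is that the sample comes from a normal distribution.
print("normality p-values:", shapiro(group_a).pvalue, shapiro(group_b).pvalue)

# Levene: H0 is that the groups have equal variances.
print("equal-variance p-value:", levene(group_a, group_b).pvalue)
```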

While understanding manual calculations is crucial, statistical software can simplify the process and reduce the likelihood of errors. Tools such as SPSS, R, or Python can handle complex calculations, provide additional insights, and generate visualizations to support your analysis.

Applying hypothesis testing to real datasets can enhance your understanding and problem-solving skills. Use datasets from your coursework or public repositories to practice. This hands-on experience helps reinforce concepts and builds confidence in your abilities.

If you encounter difficulties or uncertainties, don’t hesitate to seek help. Consult instructors, tutors, or online resources for guidance. Participating in study groups can also provide different perspectives and support.

Hypothesis testing is a vital statistical tool that enables you to draw informed conclusions and make decisions based on sample data. This methodical approach involves several key steps: understanding the problem at hand, formulating the null and alternative hypotheses, selecting the appropriate statistical test, calculating the test statistic, determining the p-value, and interpreting the results. Additionally, constructing confidence intervals can provide further insight into the range within which the population parameter is likely to fall. To tackle your statistics assignments effectively, it is crucial to practice regularly, verify that the data meet the assumptions of the chosen test, utilize statistical software for complex calculations, and seek help whenever needed. Mastering these steps will not only enhance your statistical analysis skills but also ensure that you achieve accurate and reliable results. By following this structured approach, you will be well equipped to solve your statistics assignments with confidence and precision.

When we carry out a chi-square goodness-of-fit test for a normal distribution, the null hypothesis states that the population: ________

a. Does not have a normal distribution
b. Has a normal distribution
c. Has a chi-square distribution
d. Does not have a chi-square distribution
e. Has k − 3 degrees of freedom

Step-by-step explanation:

The null hypothesis for a chi-square goodness of fit test states that the data are consistent with a specified distribution.

While the alternative hypothesis states that the data are not consistent with a specified distribution.

In this case study, the test is for a normal distribution. Thus the null hypothesis would be that the population has a normal distribution (option b).
