Chi-Square Test

In this section, we will understand the chi-square distribution and the two most common chi-square tests: the goodness-of-fit test and the test of independence.

📘

Chi-Square Distribution (\(\chi^2\)):
A random variable \(Q\) is said to follow a chi-square distribution with \(n\) degrees of freedom, i.e., \(Q \sim \chi^2(n)\),
if it is the sum of squares of \(n\) independent random variables that each follow a standard normal distribution \(N(0,1)\).

\[ Q = \sum_{i=1}^n Z_i^2 \sim \chi^2(n) \\ \text{ where: } Z_i \sim N(0,1) \\ \text{ n: degrees of freedom } \]

Key Properties:

  1. Non-negative, since it is a sum of squares.
  2. Asymmetric and right-skewed.
  3. Shape depends on the degrees of freedom; as \(\nu\) increases, the distribution becomes more symmetric and approaches a normal distribution.
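These properties are easy to verify empirically. The sketch below (a simulation using NumPy; the sample size and seed are illustrative choices) sums squares of standard normal draws and checks that the result has the mean \(n\) and variance \(2n\) of a \(\chi^2(n)\) distribution:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                 # degrees of freedom
samples = 100_000     # number of simulated Q values

# Q = sum of squares of n independent N(0,1) variables
Z = rng.standard_normal((samples, n))
Q = (Z ** 2).sum(axis=1)

# A chi-square(n) distribution has mean n and variance 2n
print(Q.mean())   # close to 5
print(Q.var())    # close to 10
print(Q.min() >= 0)   # non-negative, since it is a sum of squares
```

Increasing `n` and re-plotting a histogram of `Q` would also show the distribution becoming more symmetric, as noted in property 3.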

📘

Chi-Square (\(\chi^2\)) Test Statistic:
It is formed by squaring the standardized counts \(\frac{O_i - E_i}{\sqrt{E_i}}\), which are approximately standard normal, and summing them up.
For \(k\) categories, the test statistic is:

\[ \chi_{calc}^2 = \sum_i Z_i^2 = \sum_{i=1}^k \frac{(O_i - E_i)^2}{E_i} \]

Note: For large sample sizes, Pearson's chi-square (\(\chi^2\)) test statistic approximately follows a chi-square (\(\chi^2\)) distribution.



📘 Chi-Square (\(\chi^2\)) Test:
It is used to analyze categorical data to determine whether there is a significant difference between observed and expected counts.
It is a non-parametric test for categorical data, i.e., it does NOT make any assumption about the underlying distribution of the data (such as normally distributed with known mean and variance); it only uses observed and expected counts/frequencies.
Note: Requires a large sample size.

📘

Test of Goodness of Fit:
It is used to compare the observed frequency distribution of a single categorical variable to a hypothesized or expected probability distribution.
It can be used to determine whether a sample taken from a population follows a particular distribution, e.g., uniform, normal, etc.

Test Statistic:

\[ \chi_{calc}^2 = \sum_{i=1}^k \frac{(O_i - E_i)^2}{E_i} \]

\(O_{i}\): Observed count for \(i^{th}\) category
\(E_{i}\): Expected count for \(i^{th}\) category, under null hypothesis \(H_0\)
\(k\): Number of categories
\(\nu\): Degrees of freedom = k - 1 - m
\(m\): Number of parameters estimated from the sample data to determine the expected probabilities
Note: Typically m = 0, since no parameters are estimated.

💡 In a coin toss experiment, we tossed a coin 100 times, and got 62 heads and 38 tails.
Find whether it is a fair coin (discrete uniform distribution test)?
Significance level = 5%

We need to find whether the coin is fair, i.e., we need to perform a goodness-of-fit test for the discrete uniform distribution.

Null Hypothesis \(H_0\): Coin is fair.
Alternative Hypothesis \(H_a\): Coin is not fair. (The chi-square test is non-directional; it only detects a departure from the expected counts.)

\(O_{H}\): Observed count of heads = 62
\(O_{T}\): Observed count of tails = 38
\(E_{i}\): Expected count for \(i^{th}\) category, under null hypothesis \(H_0\) = 50, i.e., fair coin
\(k\): Number of categories = 2
\(\nu\): Degrees of freedom = k - 1- m = 2 - 1 - 0 = 1
Test Statistic:

\[ t_{obs} = \chi_{calc}^2 = \sum_{i=1}^2 \frac{(O_i - E_i)^2}{E_i} \\[10pt] = \frac{(62 - 50)^2}{50} + \frac{(38 - 50)^2}{50} \\[10pt] = \frac{144}{50} + \frac{144}{50} \\[10pt] \Rightarrow t_{obs} = 5.76 \]

Since the significance level = 5% = 0.05 and \(\nu\) = 1,
Critical value = \(\chi^2(0.05,1)\) = 3.84

Since \(t_{obs}\) = 5.76 > 3.84 (critical value), we reject the null hypothesis \(H_0\).
Therefore, the coin is not fair; the data suggest a bias toward heads.
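The same goodness-of-fit calculation can be reproduced in a few lines with SciPy (assuming SciPy is available; the function names below are from `scipy.stats`):

```python
from scipy.stats import chisquare, chi2

observed = [62, 38]   # heads, tails out of 100 tosses
expected = [50, 50]   # fair coin under H0

# Pearson's chi-square goodness-of-fit statistic and p-value
stat, p_value = chisquare(observed, f_exp=expected)
print(stat)           # 5.76, matching the hand calculation

# Critical value at 5% significance with 1 degree of freedom
critical = chi2.ppf(0.95, df=1)
print(critical)       # approximately 3.84

print(stat > critical)     # True -> reject H0
print(p_value < 0.05)      # True -> reject H0
```

Comparing the p-value to the significance level and comparing the statistic to the critical value are equivalent decision rules; they always agree.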


📘

Test of Independence:
It is used to determine whether an association exists between two categorical variables, using a contingency (dependency) table.
It is a non-parametric test, i.e., it does NOT make any assumption about the underlying distribution of the data.

Test Statistic:

\[ \chi_{calc}^2 = \sum_{i=1}^R \sum_{j=1}^C \frac{(O_{ij} - E_{ij})^2}{E_{ij}} \]

\(O_{ij}\): Observed count for \(cell_{i,j}\)
\(E_{ij}\): Expected count for \(cell_{i,j}\), under null hypothesis \(H_0\)
\(R\): Number of rows
\(C\): Number of columns
\(\nu\): Degrees of freedom = (R-1)*(C-1)

Let’s understand the above test statistic in more detail.
We know that if two random variables \(A\) and \(B\) are independent, then
\(P(A \cap B) = P(A, B) = P(A)*P(B)\)
i.e., joint probability = product of marginal probabilities.

Null Hypothesis \(H_0\): \(A\) and \(B\) are independent.
Alternative Hypothesis \(H_a\): \(A\) and \(B\) are dependent or associated.
N = Sample size
\(P(A_i) \approx \frac{Row ~~ Total_i}{N}\)

\(P(B_j) \approx \frac{Col ~~ Total_j}{N}\)

\(E_{ij}\) : Expected count for \(cell_{i,j}\) = \( N*P(A_i)*P(B_j)\)

=> \(E_{ij}\) = \(N*\frac{Row ~~ Total_i}{N}*\frac{Col ~~ Total_j}{N}\)

=> \(E_{ij}\) = \(\frac{Row ~~ Total_i * Col ~~ Total_j}{N}\)

\(O_{ij}\): Observed count for \(cell_{i,j}\)
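The expected-count formula \(E_{ij} = \frac{Row~Total_i \times Col~Total_j}{N}\) is just an outer product of the row and column totals divided by \(N\). A small NumPy sketch (using an illustrative 2×2 table of observed counts):

```python
import numpy as np

# Illustrative 2x2 table of observed counts
observed = np.array([[20, 30],
                     [10, 40]])

N = observed.sum()                  # total sample size
row_totals = observed.sum(axis=1)   # Row Total_i
col_totals = observed.sum(axis=0)   # Col Total_j

# E_ij = (Row Total_i * Col Total_j) / N, for every cell at once
expected = np.outer(row_totals, col_totals) / N
print(expected)   # [[15. 35.] [15. 35.]]
```

Note that `expected` has the same row and column totals as `observed`; the formula redistributes the counts as if the two variables were independent.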


💡

A survey of 100 students was conducted to understand whether there is any relation between gender and beverage preference.
Below is the table that shows the number of students who prefer each beverage.

| Gender | Tea | Coffee | Total |
|--------|-----|--------|-------|
| Male   | 20  | 30     | 50    |
| Female | 10  | 40     | 50    |
| Total  | 30  | 70     | 100   |

Significance level = 5%

Null Hypothesis \(H_0\): Gender and beverage preference are independent.
Alternative Hypothesis \(H_a\): Gender and beverage preference are dependent.

We know that Expected count for cell(i,j) = \(E_{ij}\) = \(\frac{Row ~~ Total_i * Col ~~ Total_j}{N}\)

\(E_{11} = \frac{50*30}{100} = 15\)

\(E_{12} = \frac{50*70}{100} = 35\)

\(E_{21} = \frac{50*30}{100} = 15\)

\(E_{22} = \frac{50*70}{100} = 35\)

Test Statistic:

\[ t_{obs} = \chi_{calc}^2 = \sum_{i=1}^R \sum_{j=1}^C \frac{(O_{ij} - E_{ij})^2}{E_{ij}} \\[10pt] = \frac{(20 - 15)^2}{15} + \frac{(30 - 35)^2}{35} + \frac{(10 - 15)^2}{15} + \frac{(40 - 35)^2}{35} \\[10pt] = \frac{25}{15} + \frac{25}{35} + \frac{25}{15} + \frac{25}{35} \\[10pt] \Rightarrow t_{obs} = \frac{50}{15} + \frac{50}{35} \approx 4.76 \]

Degrees of freedom = (R-1)(C-1) = (2-1)(2-1) = 1
Since the significance level = 5% = 0.05,
Critical value = \(\chi^2(0.05,1)\) = 3.84

Since \(t_{obs}\) = 4.76 > 3.84 (critical value), we reject the null hypothesis \(H_0\).
Therefore, gender and beverage preference are dependent.
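This test of independence can be reproduced with SciPy's `chi2_contingency` (assuming SciPy is available). One detail to note: for 2×2 tables SciPy applies Yates' continuity correction by default, so `correction=False` is needed to match the hand calculation above:

```python
from scipy.stats import chi2_contingency

table = [[20, 30],    # Male:   tea, coffee
         [10, 40]]    # Female: tea, coffee

# correction=False disables Yates' continuity correction so the
# statistic matches the manual Pearson chi-square computation
stat, p_value, dof, expected = chi2_contingency(table, correction=False)

print(round(stat, 2))    # 4.76
print(dof)               # 1, i.e., (R-1)*(C-1)
print(expected)          # [[15. 35.] [15. 35.]]
print(p_value < 0.05)    # True -> reject H0
```

The function also returns the expected-count table, so there is no need to compute \(E_{ij}\) by hand when using it.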




End of Section