Linearity Concept of Significance & Chi-Square Test

SUBMITTED TO:
PROF. YASMIN SULTANA
DEPARTMENT – PHARMACEUTICS
SCHOOL OF PHARMACEUTICAL EDUCATION AND RESEARCH
JAMIA HAMDARD

SUBMITTED BY:
SANIYA TAKKAR
M.PHARM, 1ST SEM, DEPARTMENT OF PHARMACEUTICS
TABLE OF CONTENTS
HISTORY
BASIS OF STATISTICAL INFERENCE
HYPOTHESIS
ZONE OF ACCEPTANCE AND REJECTION
TYPE 1 AND TYPE 2 ERROR
POWER OF TEST
CONFIDENCE LEVEL
EFFECT OF SAMPLE SIZE ON TEST
TEST OF SIGNIFICANCE
PARAMETRIC VS. NON-PARAMETRIC TESTS
CHI-SQUARE TEST
History

The term "statistical significance" was coined by Ronald Fisher.
Chi-Square Test

Two non-parametric hypothesis tests use the chi-square statistic: the chi-square test for goodness of fit and the chi-square test for independence.
Relation of Chi square test to
parametric and non parametric tests
• THE TERM “NON-PARAMETRIC” REFERS TO THE FACT THAT
THE CHI-SQUARE TESTS DO NOT REQUIRE ASSUMPTIONS
ABOUT POPULATION PARAMETERS NOR DO THEY TEST
HYPOTHESES ABOUT POPULATION PARAMETERS.
• PREVIOUS EXAMPLES OF HYPOTHESIS TESTS, SUCH AS THE T
TESTS AND ANALYSIS OF VARIANCE, ARE PARAMETRIC
TESTS AND THEY DO INCLUDE ASSUMPTIONS ABOUT
PARAMETERS AND HYPOTHESES ABOUT PARAMETERS.
Relation of the Chi-Square Test to Parametric and Non-Parametric Tests (Contd.)

• The difference between the chi-square tests and the other hypothesis tests we have considered (t and ANOVA) is the nature of the data.
• For chi-square, the data are frequencies rather than numerical scores.
The Chi-Square Test for Goodness-of-Fit

• The chi-square test for goodness-of-fit uses frequency data from a sample to test hypotheses about the shape or proportions of a population.
• Each individual in the sample is classified into one category on the scale of measurement.
• The data, called observed frequencies, simply count how many individuals from the sample are in each category.
The Chi-Square Test for Goodness-of-Fit (Contd.)

• The null hypothesis specifies the proportion of the population that should be in each category.
• The proportions from the null hypothesis are used to compute expected frequencies that describe how the sample would appear if it were in perfect agreement with the null hypothesis.
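As a rough illustration (not part of the original slides), the goodness-of-fit test described above can be run in Python with scipy.stats.chisquare; the observed counts and null-hypothesis proportions below are invented example values.

# Goodness-of-fit sketch: the counts and null proportions are invented
# example data, assuming four categories and n = 100.
from scipy import stats

observed = [35, 25, 20, 20]                    # observed frequencies (fo)
null_proportions = [0.25, 0.25, 0.25, 0.25]    # H0: all categories equally likely
n = sum(observed)
expected = [p * n for p in null_proportions]   # fe = p * n for each category

chi2, p_value = stats.chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {chi2:.3f}, p = {p_value:.3f}")
# A small p-value indicates the observed frequencies do not fit the
# proportions specified by the null hypothesis.

For these made-up counts, chi-square = 6.0 with 3 degrees of freedom (number of categories minus one), which is not significant at the usual 0.05 level.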
The Chi-Square Test for Independence
The second chi-square test, the chi-square test for
independence, can be used and interpreted in two
different ways:
1. Testing hypotheses about the relationship
between two variables in a population, or
2. Testing hypotheses about differences between
proportions for two or more populations.
The Chi-Square Test for Independence
(Cont)
Although the two versions of the test for independence appear to be different, they are equivalent and interchangeable.
The first version of the test emphasizes the
relationship between chi-square and a correlation,
because both procedures examine the relationship
between two variables.
The Chi-Square Test for Independence
(Cont)
The second version of the test emphasizes
the relationship between chi-square and an
independent-measures t test (or ANOVA)
because both tests use data from two (or
more) samples to test hypotheses about the
difference between two (or more) populations.
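As a rough sketch (not from the original slides), the test for independence can be run in Python with scipy.stats.chi2_contingency; the 2×2 table of observed frequencies is invented example data.

# Test-for-independence sketch; the observed 2x2 table is invented data.
# correction=False turns off Yates' continuity correction so the result
# matches the plain chi-square formula given on the following slides.
import numpy as np
from scipy import stats

# Rows: treatment A / treatment B; columns: improved / not improved
observed = np.array([[30, 20],
                     [15, 35]])

chi2, p_value, dof, expected = stats.chi2_contingency(observed, correction=False)
print("expected frequencies:\n", expected)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p_value:.4f}")
# Rejecting H0 means the two variables (row and column classifications)
# are not independent, i.e. the column proportions differ between the rows.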
The Chi-Square Test for Independence
(Cont)
For the goodness-of-fit test, the expected frequency for each category is obtained by

expected frequency = fe = pn

(p is the proportion from the null hypothesis and n is the size of the sample)

For the test for independence, the expected frequency for each cell in the matrix is obtained by

expected frequency = fe = (row total)(column total) / n
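A minimal sketch of the second formula, computing fe = (row total)(column total) / n for every cell of a contingency table; the observed table is invented example data.

# Expected frequencies from row and column totals; invented example data.
import numpy as np

observed = np.array([[30, 20],
                     [15, 35]])
n = observed.sum()
row_totals = observed.sum(axis=1, keepdims=True)   # shape (2, 1)
col_totals = observed.sum(axis=0, keepdims=True)   # shape (1, 2)

expected = row_totals * col_totals / n             # fe for every cell at once
print(expected)
# e.g. top-left cell: fe = (50 * 45) / 100 = 22.5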
The Chi-Square Test for Independence
(Cont)
A chi-square statistic is computed to measure the amount of
discrepancy between the ideal sample (expected frequencies
from H0) and the actual sample data (the observed
frequencies = fo).
A large discrepancy results in a large value for chi-square
and indicates that the data do not fit the null hypothesis and
the hypothesis should be rejected.
The Chi-Square Test for Independence
(Cont)
The calculation of chi-square is the same for all chi-square tests:
chi-square = χ² = Σ (fo – fe)² / fe
The fact that chi-square tests do not require scores from an interval or
ratio scale makes these tests a valuable alternative to the t tests, ANOVA,
or correlation, because they can be used with data measured on a
nominal or an ordinal scale.
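The formula above can be applied directly; in the sketch below (invented example frequencies, with expected values computed as on the earlier slide) the statistic is summed over all cells and the p-value is read from the chi-square distribution.

# chi-square = sum over all cells of (fo - fe)^2 / fe; invented example data.
import numpy as np
from scipy import stats

fo = np.array([[30.0, 20.0],
               [15.0, 35.0]])                 # observed frequencies
fe = np.array([[22.5, 27.5],
               [22.5, 27.5]])                 # expected frequencies under H0

chi2 = ((fo - fe) ** 2 / fe).sum()
dof = (fo.shape[0] - 1) * (fo.shape[1] - 1)   # df = (rows - 1)(columns - 1) for independence
p_value = stats.chi2.sf(chi2, dof)            # area in the upper tail of the chi-square distribution
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p_value:.4f}")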
Measuring Effect Size for the Chi-Square Test for Independence
When both variables in the chi-square test for
independence consist of exactly two
categories (the data form a 2×2 matrix), it is
possible to re-code the categories as 0 and 1
for each variable and then compute a
correlation known as a phi-coefficient that
measures the strength of the relationship.
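A minimal sketch of the phi-coefficient for a 2×2 table, using the relationship phi = sqrt(chi-square / n); the table is invented example data, and correction=False keeps the plain chi-square statistic described on the earlier slides.

# Phi coefficient as an effect-size measure for a 2x2 table; invented data.
import numpy as np
from scipy import stats

observed = np.array([[30, 20],
                     [15, 35]])
chi2, p, dof, fe = stats.chi2_contingency(observed, correction=False)
n = observed.sum()
phi = np.sqrt(chi2 / n)     # phi = sqrt(chi-square / n) for the 2x2 case
print(f"phi = {phi:.3f}")
# The absolute value of the Pearson correlation between the two variables,
# each recoded as 0/1, equals this phi.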