
NON-PARAMETRIC TESTS

HYPOTHESIS

■ A hypothesis is usually considered the principal instrument in research. Its main function is to suggest new experiments and observations.

■ In fact, many experiments are carried out with the deliberate object of testing hypotheses. Decision-makers often face situations wherein they are interested in testing hypotheses on the basis of available information and then taking decisions on the basis of such testing.

■ In social science, where direct knowledge of population parameter(s) is rare, hypothesis testing is the most frequently used strategy for deciding whether sample data offer enough support for a hypothesis that a generalisation can be made.

■ Thus hypothesis testing enables us to make probability statements about population parameter(s). The hypothesis may not be proved absolutely, but in practice it is accepted if it has withstood critical testing. Before we explain how hypotheses are tested through different tests meant for the purpose, it will be appropriate to explain clearly the meaning of a hypothesis and the related concepts for a better understanding of hypothesis testing techniques.

A hypothesis may be defined as a proposition or a set of propositions set forth as an explanation for the occurrence of some specified group of phenomena, either asserted merely as a provisional conjecture to guide some investigation or accepted as highly probable in the light of established facts.

■ Null hypothesis and alternative hypothesis: In the context of statistical analysis, we often talk about the null hypothesis and the alternative hypothesis. If we are to compare method A with method B with regard to superiority, and we proceed on the assumption that both methods are equally good, then this assumption is termed the null hypothesis.

■ As against this, if we think that method A is superior or that method B is inferior, we are then stating what is termed the alternative hypothesis. The null hypothesis is generally symbolised as H0 and the alternative hypothesis as Ha.

  Suppose we want to test the hypothesis that the population mean (μ) is equal to the hypothesised mean μH0 = 100.

■ Then we would say that the null hypothesis is that the population mean is equal to the hypothesised mean 100, and symbolically we can express this as:

■ H0: μ = μH0 = 100

■ If our sample results do not support this null hypothesis, we should conclude that something else is true. What we conclude on rejecting the null hypothesis is known as the alternative hypothesis. In other words, the set of alternatives to the null hypothesis is referred to as the alternative hypothesis. If we accept H0, then we are rejecting Ha, and if we reject H0, then we are accepting Ha. For

■ H0: μ = μH0 = 100, we may consider three possible alternative hypotheses: Ha: μ ≠ μH0 (the population mean is not equal to 100), Ha: μ > μH0 (the population mean is greater than 100), and Ha: μ < μH0 (the population mean is less than 100).

■ In a statistical test, two kinds of assertions are involved, viz., an assertion directly related to the purpose of the investigation and other assertions needed to make a probability statement. The former is the assertion to be tested and is technically called the hypothesis, whereas the set of all other assertions is called the model. When we apply a test (to test the hypothesis) without a model, it is known as a distribution-free, or nonparametric, test.

■ Non-parametric tests do not make an assumption about the parameters of the

  population and thus do not make use of the parameters of the distribution.

■ In other words, under non-parametric or distribution-free tests we do not assume that a

  particular distribution is applicable, or that a certain value is attached to a parameter of

  the population.

■ For instance, while testing the two training methods, say A and B, for determining the

  superiority of one over the other, if we do not assume that the scores of the trainees are

  normally distributed or that the mean score of all trainees taking method A would be a

  certain value, then the testing method is known as a distribution-free or nonparametric

  method.

■ In fact, there is a growing use of such tests in situations when the normality assumption

  is open to doubt. As a result many distribution-free tests have been developed that do

  not depend on the shape of the distribution or deal with the parameters of the

  underlying population.

IMPORTANT NONPARAMETRIC OR DISTRIBUTION-FREE TESTS

■ (i) Test of a hypothesis concerning some single value for the given data (such as the one-sample sign test).

■ (ii) Test of a hypothesis concerning no difference among two or more sets of data (such as two-

    sample sign test, Fisher-Irwin test, Rank sum test, etc.).

■ (iii) Test of a hypothesis of a relationship between variables (such as Rank correlation, Kendall’s coefficient of concordance and other tests for dependence).

■ (iv) Test of a hypothesis concerning variation in the given data i.e., test analogous to ANOVA viz.,

    Kruskal-Wallis test.

■ (v) Tests of randomness of a sample based on the theory of runs viz., one sample runs test.

■ (vi) Test of hypothesis to determine if categorical data shows dependency or if two classifications

    are independent viz., the chi-square test. The chi-square test can as well be used to make

    comparison between theoretical populations and actual data when categories are used.

Wilcoxon Matched-Pairs Test (or Signed Rank Test)

■ In various research situations in the context of two related samples (i.e., the case of matched pairs, such as a study where husband and wife are matched, or when we compare the output of two similar machines, or where some subjects are studied in the context of a before-after experiment), when we can determine both the direction and the magnitude of the difference between matched values, we can use an important non-parametric test, viz., the Wilcoxon matched-pairs test.

■ While applying this test, we first find the differences (di) between each pair of values and assign ranks to the differences from the smallest to the largest without regard to sign. The actual sign of each difference is then attached to its corresponding rank, and the test statistic T is calculated, which happens to be the smaller of the two sums, viz., the sum of the negative ranks and the sum of the positive ranks.

■ While using this test, we may come across two types of tie situations. One situation arises when the two values of some matched pair(s) are equal, i.e., the difference between the values is zero, in which case we drop such pair(s) from our calculations.

■ The other situation arises when two or more pairs have the same difference value, in which case we assign ranks to such pairs by averaging their rank positions. For instance, if two pairs would jointly occupy rank positions 5 and 6, we assign the rank of 5.5, i.e., (5 + 6)/2 = 5.5, to each pair and rank the next largest difference as 7.

■ For this test, the calculated value of T must be equal to or smaller than the table value in order to reject the null hypothesis. In case the number of pairs (n) exceeds 25, the sampling distribution of T is taken as approximately normal with mean μT = n(n + 1)/4 and standard deviation σT = √[n(n + 1)(2n + 1)/24].
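A minimal sketch of this procedure, assuming scipy is available (the before/after scores below are hypothetical illustration data; scipy.stats.wilcoxon applies the same rules described above):

    from scipy import stats

    # Hypothetical before/after scores for the same subjects.
    before = [62, 63, 50, 55, 70, 74, 58, 68, 52, 60]
    after  = [68, 62, 64, 59, 70, 80, 63, 70, 50, 69]

    # zero_method='wilcox' drops zero-difference pairs, matching the
    # tie rule above; tied non-zero differences get averaged ranks.
    res = stats.wilcoxon(before, after, zero_method='wilcox',
                         alternative='two-sided')
    print("T =", res.statistic, "p =", res.pvalue)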

■ Let us first write the null and alternative hypotheses as under:

■ H0: There is no difference between the perceived quality of the two samples.

■ Ha: There is a difference between the perceived quality of the two samples.

■ Using the Wilcoxon matched-pairs test, we work out the value of the test statistic T for the given data.

■ The table value of T at the 5 per cent level of significance when n = 15 is 25 (using a two-tailed test because our alternative hypothesis is that there is a difference between the perceived quality of the two samples). The calculated value of T is 18.5, which is less than the table value of 25.

■ As such we reject the null hypothesis and conclude that there is a difference between the perceived quality of the two samples.

Placebo:  41 56 64 42 50 70 44 57 63

New drug: 06 43 72 62 55 80 74 75 77 78

Rank Sum Tests

■ Rank sum tests are a whole family of tests, but we shall describe only two such tests commonly used, viz., the U test and the H test. The U test is popularly known as the Wilcoxon-Mann-Whitney test, whereas the H test is also known as the Kruskal-Wallis test. A brief description of the two tests is given below:

■ (a) Wilcoxon-Mann-Whitney test (or U-test): This is a very popular test amongst the rank sum tests. This test is used to determine whether two independent samples have been drawn from the same population.

■ To perform this test, we first of all rank the data jointly, taking them as belonging to a

  single sample in either an increasing or decreasing order of magnitude.

■ We usually adopt the low-to-high ranking process, which means we assign rank 1 to the item with the lowest value, rank 2 to the next higher item, and so on.

■ In case there are ties, we assign each of the tied observations the mean of the ranks which they jointly occupy. For example, if the sixth, seventh and eighth values are identical, we would assign each the rank (6 + 7 + 8)/3 = 7, as illustrated in the sketch below.
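A quick sketch of this averaging rule, assuming scipy is available (the values are hypothetical):

    from scipy.stats import rankdata

    # rankdata assigns tied observations the mean of the ranks
    # they jointly occupy (method='average' is the default).
    values = [10, 12, 15, 15, 15, 18]   # three-way tie at 15
    print(rankdata(values))             # [1. 2. 4. 4. 4. 6.] -> (3+4+5)/3 = 4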

■ After this we find the sum of the ranks assigned to the values of the first sample (and

  call it R1) and also the sum of the ranks assigned to the values of the second sample

  (and call it R2).

■ Then we work out the test statistic U, which is a measurement of the difference between the ranked observations of the two samples, as under:

  U = n1·n2 + n1(n1 + 1)/2 − R1

■ In applying the U test we take the null hypothesis that the two samples come from identical populations. If this hypothesis is true, it seems reasonable to suppose that the means of the ranks assigned to the values of the two samples should be more or less the same.

■ Under the alternative hypothesis, the means of the two populations are not equal

  and if this is so, then most of the smaller ranks will go to the values of one sample

  while most of the higher ranks will go to those of the other sample.

■ If the null hypothesis that the n1 + n2 observations came from identical populations is true, the said U statistic has a sampling distribution with mean μU = n1·n2/2 and standard deviation σU = √[n1·n2(n1 + n2 + 1)/12].

■ If n1 and n2 are sufficiently large (i.e., both greater than 8), the sampling

  distribution of U can be approximated closely with normal distribution and the limits

  of the acceptance region can be determined in the usual way at a given level of

  significance.

■ But if either n1 or n2 is so small that the normal curve approximation to the sampling distribution of U cannot be used, we instead use exact tables of Wilcoxon’s distribution, as in the second example below.

■ Example:

■ The values in one sample are 53, 38, 69, 57, 46, 39, 73, 48, 73, 74, 60 and 78. In

  another sample they are 44, 40, 61, 52, 32, 44, 70, 41, 67, 72, 53 and 72. Test at

  the 10% level the hypothesis that they come from populations with the same mean.

  Apply U-test.

■ First of all we assign ranks to all observations, adopting the low-to-high ranking process on the presumption that all given items belong to a single sample. We then sum the ranks of each sample, as shown in the sketch below.

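Since n1 = n2 = 12 (both greater than 8), the normal approximation applies. A minimal sketch of the computation, assuming scipy is available:

    from scipy import stats

    sample1 = [53, 38, 69, 57, 46, 39, 73, 48, 73, 74, 60, 78]
    sample2 = [44, 40, 61, 52, 32, 44, 70, 41, 67, 72, 53, 72]

    # Two-tailed U test with the normal (asymptotic) approximation.
    res = stats.mannwhitneyu(sample1, sample2, alternative='two-sided',
                             method='asymptotic')
    print("U =", res.statistic, "p =", res.pvalue)
    # At the 10% level, reject H0 only if p < 0.10.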

■ Two samples with values 90, 94, 36 and 44 in one case, and the other with values 53, 39, 6, 24 and 33, are given. Applying the Wilcoxon test, test whether the two samples come from populations with the same mean at the 10% level against the alternative hypothesis that these samples come from populations with different means.

■ As the number of items in the two samples is less than 8, we cannot use the normal curve approximation technique as stated above and shall use the table giving values of Wilcoxon’s distribution.

■ To use this table, we denote ‘Ws’ as the smaller of the two sums and ‘Wl’ as the larger. Also, let ‘s’ be the number of items in the sample with the smaller sum and let ‘l’ be the number of items in the sample with the larger sum.

■ The value of Ws is 18 for sample two which has five items and as such s = 5.

■ We now find the difference between Ws and the minimum value it might have taken,

  given the value of s. The minimum value that Ws could have taken, given that s = 5, is

  the sum of ranks 1 through 5 and this comes as equal to 1 + 2 + 3 + 4 + 5 = 15.

■ Thus, (Ws – Minimum Ws) = 18 – 15 = 3. To determine the probability that a result as

  extreme as this or more so would occur,

■ we find the cell of the table which is in the column headed by the number 3 and in the

  row for s = 5 and l = 4 (the specified values of l are given in the second column of the

  table).

■ The entry in this cell is 0.056 which is the required probability of getting a value as small

  as or smaller than 3 and now we should compare it with the significance level of 10%.

■ Since the alternative hypothesis is that the two samples come from populations with

  different means, a two-tailed test is appropriate and accordingly 10% significance level

  will mean 5% in the left tail and 5% in the right tail. In other words, we should compare

  the calculated probability with the probability of 0.05, given the null hypothesis and the

  significance level.

■ If the calculated probability happens to be greater than 0.05 (which actually is so in the

  given case as 0.056 > 0.05), then we should accept the null hypothesis. Hence, in the

  given problem, we must conclude that the two samples come from populations with the

  same mean.

■ (We can get the same result by using the value of Wl. The only difference is that the value (Maximum Wl − Wl) is required. Since for this problem the maximum value of Wl, given s = 5 and l = 4, is the sum of ranks 6 through 9, i.e., 6 + 7 + 8 + 9 = 30,

■ we have Max. Wl − Wl = 30 − 27 = 3, which is the same value that we worked out earlier as Ws − Minimum Ws. All other things then remain the same as stated above.)
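The same small-sample decision can be reproduced with scipy's exact Mann-Whitney distribution (a sketch, assuming scipy is available):

    from scipy import stats

    a = [90, 94, 36, 44]
    b = [53, 39, 6, 24, 33]

    # n1 = 4 and n2 = 5 are too small for the normal approximation,
    # so the exact sampling distribution of U is used.
    res = stats.mannwhitneyu(a, b, alternative='two-sided', method='exact')
    print("U =", res.statistic, "p =", res.pvalue)
    # With a two-tailed 10% level, H0 is rejected only if p < 0.10;
    # here p exceeds 0.10, agreeing with the table-based conclusion above.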

■ Method 2: we can also find the z value by using the following equation (with the smaller U): z = (U − n1·n2/2) / √[n1·n2(n1 + n2 + 1)/12]
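A minimal sketch of this formula in code (the function name and the plugged-in U value are ours, for illustration only):

    import math

    def u_test_z(u_small, n1, n2):
        mu_u = n1 * n2 / 2                                 # mean of U
        sigma_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # s.d. of U
        return (u_small - mu_u) / sigma_u

    # Hypothetical illustration: U = 54.5 with n1 = n2 = 12 gives z ≈ -1.01,
    # well inside the ±1.645 acceptance limits at the 10% level.
    print(u_test_z(54.5, 12, 12))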

The Kruskal-Wallis test (or H test):

■ This test is conducted in a way similar to the U test described above. This test is

  used to test the null hypothesis that ‘k’ independent random samples come from

  identical universes against the alternative hypothesis that the means of these

  universes are not equal.

■ This test is analogous to the one-way analysis of variance, but unlike the latter it does not require the assumption that the samples come from approximately normal populations or that the universes have the same standard deviation.

■ In this test, like the U test, the data are ranked jointly from low to high or high to low as if they constituted a single sample. The test statistic for this test is H, which is worked out as under:

  H = [12 / (n(n + 1))] · Σ (Ri² / ni) − 3(n + 1)

  where n = n1 + n2 + … + nk and Ri is the sum of the ranks assigned to the ni observations in the ith sample.
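A minimal sketch of this formula in code (ignoring the tie correction; assumes scipy for the joint ranking; the function name is ours):

    from scipy.stats import rankdata

    def kruskal_h(*samples):
        # Pool all observations and rank them jointly, low to high.
        pooled = [v for s in samples for v in s]
        ranks = rankdata(pooled)
        n = len(pooled)
        h, start = 0.0, 0
        for s in samples:
            r_i = ranks[start:start + len(s)].sum()  # rank sum of the ith sample
            h += r_i ** 2 / len(s)
            start += len(s)
        return 12 / (n * (n + 1)) * h - 3 * (n + 1)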

■ Now taking the null hypothesis that the bowler performs equally well with the four balls, we have the value of χ² = 7.815 for (k − 1) = 4 − 1 = 3 degrees of freedom at the 5% level of significance.

■ Since the calculated value of H is only 4.51 and does not exceed the χ² value of 7.815, we accept the null hypothesis and conclude that the bowler performs equally well with the four bowling balls.

■ A pharma company wants to know if three groups of workers have different

  salaries:

 

Women: 23K, 41K, 54K, 66K, 78K.

Men: 45K, 55K, 60K, 70K, 72K

Minorities: 18K, 30K, 34K, 40K, 44K.
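A minimal sketch of the H test on these three groups (values in $K), assuming scipy is available:

    from scipy import stats

    women      = [23, 41, 54, 66, 78]
    men        = [45, 55, 60, 70, 72]
    minorities = [18, 30, 34, 40, 44]

    H, p = stats.kruskal(women, men, minorities)
    print("H =", H, "p =", p)
    # Compare H with the chi-square critical value for k - 1 = 2 d.f.
    # (5.991 at the 5% level); reject H0 if H exceeds it.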

Friedman test

■ The Friedman test is a non-parametric statistical method developed by Milton Friedman.

■ The Friedman test is a non-parametric alternative to ANOVA with repeated

  measures.

■ It is used to test for differences between groups when the dependent variable being

  measured is ordinal.

■ The Friedman test tests the null hypothesis of identical populations for dependent data.

■ The test is similar to the Kruskal-Wallis Test.

■ It uses only the rank information of the data.

■ Steps involved in testing

■ 1) Formulation of hypothesis

■ 2) Significance level

■ 3) Test statistics

■ 4) Calculations

■ 5) Critical region

■ 6) Conclusion

■ 1) Formulation of hypothesis

■ As we check the equality of treatment effects, analogous to ANOVA, the hypotheses are stated as:

■ H0: M1 = M2 = … = Mk (all treatment medians are equal)

■ H1: not all medians are equal

■ 2) Level of significance:

■ It is selected as given; if not given, α = 0.05 is taken.

■ 3) Test statistic:

  Q = [12 / (m·k(k + 1))] · Σ R.j² − 3m(k + 1)

■ where R.j² is the square of the rank total for group j (j = 1, 2, …, k)

■ m is the number of independent blocks

■ k is the number of groups or treatment levels

■ 4) Calculations:

■ Start with m rows (blocks) and k columns (treatments).

■ Rank-order the entries of each row independently of the other rows.

■ Sum the ranks for each column.

■ Sum the squared column totals.

■ Using the test statistic, calculate the value of Q.

■ 5) Critical region:

■ Reject H0 if Q ≥ the critical value at α = 5%.

■ If the values of k and/or n exceed those given in tables, the significance of Q may be

  looked up in chi-squared (χ2) distribution tables with k-1 degrees of freedom.

■ 6) Conclusion:

■ If the value of Q is less than the critical value, we do not reject H0.

■ If the value of Q is greater than the critical value, we reject H0.
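Putting these steps together, a minimal sketch of the Friedman test, assuming scipy is available (the block-by-treatment ratings are hypothetical; each list holds one treatment's values across the same m blocks):

    from scipy import stats

    treatment_a = [7, 5, 8, 6, 7, 8]
    treatment_b = [5, 4, 6, 5, 5, 6]
    treatment_c = [8, 6, 9, 8, 8, 9]

    Q, p = stats.friedmanchisquare(treatment_a, treatment_b, treatment_c)
    print("Q =", Q, "p =", p)
    # Reject H0 at alpha = 0.05 if Q is at least the critical value
    # (chi-square with k - 1 = 2 d.f. for sufficiently large m and k).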