Jump to a key chapter
- We will start by looking at the use and application of non-parametric tests in psychology. To consolidate your understanding, we will then work through a non-parametric test example.
- Then, we will delve into the non-parametric assumptions.
- Moving along, we will explore the difference between parametric and non-parametric tests.
- Finally, we will look at the advantages and disadvantages of non-parametric tests.
Non-Parametric Tests in Psychology
Non-parametric tests are used as an alternative when parametric tests cannot be carried out.
Non-parametric tests are also known as distribution-free tests: statistical tests that do not require normally distributed data.
Non-parametric tests include the Kruskal-Wallis test and the Spearman correlation. These are used when their parametric equivalents (e.g. the one-way ANOVA and the Pearson correlation) cannot be carried out because the data does not meet the required assumptions.
Application of Non-Parametric Tests
Non-parametric tests work on ranks rather than raw scores: the data is ordered numerically and each value is given a rank number.
Each value is assigned a ‘+’ if it is greater than the reference value (where the value is expected/hypothesised to fall) and a ‘-’ if it is lower than the reference value. These signed, ranked values become the data points for the non-parametric statistical analysis.
Non-Parametric Test: Examples
The example data set below illustrates how data is ranked for a non-parametric test:
Data set: 25, 16, 6, 16, 30. The predicted reference value is 20.
| X1 | X2 | X3 | X4 | X5 |
| --- | --- | --- | --- | --- |
| -6 | -16 | -16 | +25 | +30 |
The data is ranked numerically from the lowest (6) to the highest (30). As there are two instances of the value 16, both are assigned the average of ranks 2 and 3, i.e. 2.5.
The predicted reference value is 20; therefore, 25 and 30 have positive values, and the rest have negative values.
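The ranking and sign assignment described above can be reproduced with a minimal Python sketch (assuming NumPy and SciPy are available; the variable names are purely illustrative):

```python
import numpy as np
from scipy.stats import rankdata

data = np.array([25, 16, 6, 16, 30])  # example data set from above
reference = 20                         # predicted reference value

# Rank the values from lowest to highest; tied values share the
# average of the ranks they occupy (the two 16s both get 2.5).
ranks = rankdata(data, method="average")

# Assign '+' to values above the reference value and '-' to values below it.
signs = np.where(data > reference, "+", "-")

for value, rank, sign in zip(data, ranks, signs):
    print(f"value={value:>2}  rank={rank:>3}  sign={sign}")
```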
Non-Parametric: Assumptions
Non-parametric tests have fewer restrictions than parametric tests, and it is appropriate to use them in several situations. For example:
When data is nominal, i.e. when values are assigned to distinct categories with no inherent order (e.g. responses to ‘What is your ethnicity?’)
When data is ordinal, i.e. when values have a set order or scale (e.g. ‘Rate your anger from 1-10’.)
When there are outliers identified in the data set
When the data is collected from a small sample
However, it is important to note that non-parametric tests are typically used when the following conditions apply:
At least one assumption of the parametric test has been violated, e.g. homogeneity of variance: the amount of ‘noise’ (potential experimental error) should be similar in each variable and between groups.
The data is not normally distributed; in other words, the data is likely skewed (the sketch after this list shows one quick way to check normality and homogeneity of variance).
Randomness: the data should come from a random sample of the target population.
Independence: the data from each participant should not be correlated with data from other participants; that is, one participant’s measurements should not be influenced by, or associated with, another’s.
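As a rough illustration, here is a minimal sketch of how the normality and homogeneity-of-variance assumptions might be checked before choosing between a parametric and a non-parametric test. It assumes SciPy is available and uses two made-up groups of scores; the Shapiro-Wilk and Levene tests are common choices, not the only ones.

```python
from scipy.stats import shapiro, levene

# Illustrative scores from two independent groups (made-up numbers).
group_a = [12, 15, 14, 10, 48, 13, 11, 16, 14, 12]
group_b = [22, 25, 21, 24, 23, 26, 20, 27, 25, 24]

# Shapiro-Wilk test: a small p-value suggests the data is not
# normally distributed (a parametric assumption is violated).
for name, scores in [("group_a", group_a), ("group_b", group_b)]:
    stat, p = shapiro(scores)
    print(f"{name}: Shapiro-Wilk p = {p:.3f}")

# Levene's test: a small p-value suggests the groups do not have
# similar variances (homogeneity of variance is violated).
stat, p = levene(group_a, group_b)
print(f"Levene p = {p:.3f}")
```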
Difference Between Parametric and Non-Parametric Tests
The table below shows examples of non-parametric tests. It includes their parametric equivalents, the purpose of each test, and example research appropriate for each statistical test.
| Non-parametric test | Equivalent parametric test | Purpose of statistical test | Example |
| --- | --- | --- | --- |
| Wilcoxon signed-rank test | Paired t-test | Compares two related measurements taken from the same participants | The difference in depression scores before and after treatment |
| Mann-Whitney U test | Unpaired t-test | Compares a variable measured in two independent groups | The difference in depression symptom severity between a placebo group and a drug therapy group |
| Spearman correlation | Pearson correlation | Measures the relationship (strength and direction) between two variables | The relationship between fitness test scores and the number of hours spent exercising |
| Kruskal-Wallis test | One-way analysis of variance (ANOVA) | Compares three or more independent groups (a between-subjects design; the independent variable needs three or more levels) | The difference in fitness test scores between individuals who exercise frequently, moderately, or not at all |
| Friedman’s ANOVA | One-way repeated measures ANOVA | Compares three or more related conditions measured from the same participants (a within-subjects design; the independent variable needs three or more levels) | The difference in fitness test scores measured in the morning, afternoon, and evening |
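To make the table concrete, here is a minimal sketch (assuming SciPy and small made-up data sets) showing how each non-parametric test in the table could be run; `scipy.stats.wilcoxon` performs the signed-rank test, and the other functions correspond to the remaining rows.

```python
from scipy import stats

# Made-up depression scores before and after treatment (same participants).
before = [30, 28, 35, 32, 29, 31, 27, 34]
after = [25, 27, 30, 28, 26, 29, 24, 30]
print("Wilcoxon signed-rank:", stats.wilcoxon(before, after))

# Made-up symptom severity in placebo and drug groups (independent groups).
placebo = [18, 22, 20, 25, 19, 23]
drug = [12, 15, 14, 17, 13, 16]
print("Mann-Whitney U:", stats.mannwhitneyu(placebo, drug))

# Made-up fitness test scores and weekly exercise hours.
fitness = [55, 60, 62, 70, 75, 80]
hours = [1, 2, 3, 5, 6, 8]
print("Spearman correlation:", stats.spearmanr(fitness, hours))

# Made-up fitness scores for three independent exercise-frequency groups.
frequent = [78, 82, 75, 80]
moderate = [65, 70, 68, 66]
no_exercise = [50, 55, 52, 58]
print("Kruskal-Wallis:", stats.kruskal(frequent, moderate, no_exercise))

# Made-up fitness scores for the same participants at three times of day.
morning = [60, 65, 70, 62]
afternoon = [63, 66, 72, 64]
evening = [58, 62, 68, 60]
print("Friedman:", stats.friedmanchisquare(morning, afternoon, evening))
```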
Advantages of Non-Parametric Tests
Research using non-parametric tests has many advantages:
Statistical analysis uses computations based on signs or ranks. Thus, outliers in the data set are unlikely to affect the analysis.
They are appropriate to use even when the research sample size is small.
They are less restrictive than parametric tests as they don’t have to meet as many criteria or assumptions. Therefore, they can be applied to data in various situations.
They have more statistical power than parametric tests when the assumptions of parametric tests have been violated. This is because they use the median rather than the mean as the measure of central tendency, and the median is less affected by outliers (see the short example after this list).
Many non-parametric tests have been a standard in psychology research for many years: the chi-square test, the Fisher exact probability test, and Spearman’s correlation test.
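The point about the median being robust to outliers can be seen with a short worked example (illustrative numbers only):

```python
import statistics

scores = [12, 13, 14, 15, 16]
scores_with_outlier = scores + [95]  # one extreme value added

# The mean shifts noticeably when the outlier is added,
# while the median barely moves.
print(statistics.mean(scores), statistics.median(scores))    # 14 and 14
print(statistics.mean(scores_with_outlier),
      statistics.median(scores_with_outlier))                # 27.5 and 14.5
```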
Disadvantages of Non-Parametric Tests
Non-parametric tests also have disadvantages that we should consider:
The mean is often considered the best and most standard measure of central tendency because it uses every data point in the data set: if any value changes, the mean changes with it. The median does not always reflect such changes, so some information is lost when non-parametric tests rely on it.
Because these tests work on ranks and discard some of the information in the raw scores, they generally have less statistical power than parametric tests when the parametric assumptions are actually met. This increases the likelihood of a Type II error (essentially a ‘false negative’: failing to reject the null hypothesis when it is in fact false), which reduces the validity of the findings (the short simulation after this list illustrates the power difference).
Non-parametric tests are often considered appropriate for hypothesis testing only, as they do not readily provide effect sizes (a quantitative value that tells you how strongly two variables are related) or confidence intervals. This means that researchers cannot easily identify how much the independent variable affects the dependent variable, so the utility of the results is limited and their practical importance is harder to establish.
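As a rough illustration of this power difference, here is a minimal simulation sketch (assuming NumPy and SciPy; the sample size, effect size, and number of iterations are arbitrary choices) that counts how often a parametric and a non-parametric test each detect a genuine difference between two normally distributed groups.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n, effect = 2000, 20, 0.6   # arbitrary simulation settings
t_hits = u_hits = 0

for _ in range(n_sims):
    a = rng.normal(0.0, 1.0, n)      # control group
    b = rng.normal(effect, 1.0, n)   # group with a genuine effect
    if stats.ttest_ind(a, b).pvalue < 0.05:
        t_hits += 1                   # t-test detected the effect
    if stats.mannwhitneyu(a, b).pvalue < 0.05:
        u_hits += 1                   # Mann-Whitney detected the effect

# With normally distributed data, the parametric t-test typically
# detects the effect slightly more often than the non-parametric test.
print(f"t-test power ≈ {t_hits / n_sims:.2f}")
print(f"Mann-Whitney power ≈ {u_hits / n_sims:.2f}")
```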
Non-Parametric Tests - Key takeaways
- Non-parametric tests are also known as distribution-free tests. These are statistical tests that do not require normally distributed data.
- Non-parametric tests determine the value of data points by assigning + or - signs based on the data ranking. The analysis process involves numerically ordering data and identifying their rank number. This ranked data is used as data points for non-parametric statistical analysis.
- Examples of non-parametric tests are the Wilcoxon signed-rank test, the Mann-Whitney U test, the Spearman correlation, the Kruskal-Wallis test, and Friedman’s ANOVA. Each of these tests has a parametric equivalent.
- Non-parametric tests are typically used when the more restrictive assumptions of parametric tests have been violated. Despite this, there are clear advantages to using non-parametric tests.
Frequently Asked Questions about Non-Parametric Tests
What is a non-parametric test?
Non-parametric tests are also known as distribution-free tests. These are statistical tests that do not require normally distributed data for the analysis.
When should non-parametric tests be used?
Non-parametric tests should be used when:
- The data is not normally distributed.
- At least one of the assumptions of the parametric test has been violated.
- The data is nominal or ordinal.
- There are outliers in the data set.
- The sample size is small.
What is the difference between non-parametric tests and parametric tests?
Parametric tests assume normally distributed data and use the mean as the measure of central tendency in the analysis, whereas non-parametric tests make no such distributional assumption and analyse ranks, using the median as the measure of central tendency.
Are parametric tests always more sensitive than non-parametric tests?
Not always. Parametric tests are generally more sensitive (more powerful) when their assumptions are met, but when those assumptions are violated, for example with skewed data or outliers, non-parametric tests can be the more sensitive choice.
What is an example of a non-parametric statistical test?
The Kruskal-Wallis test. It compares three or more independent groups (a between-subjects design). An example of research that could use the Kruskal-Wallis test is measuring the difference in fitness test scores between individuals who exercise frequently, moderately, or not at all.