Experimental research methods
Lecture 1
Descriptive statistics
Hypothesis testing
Interval estimation
Testing means
Lecture 2
Power of a test
Effect size
One-way analysis of variance (ANOVA)
Logic ANOVA
Lecture 3
Statistical model of an ANOVA
Calculations for SS
F-statistic in an ANOVA
Statistical power of ANOVA
Lecture 4
Assumptions ANOVA
Testing contrasts
General idea: specifying contrasts
Procedure to determine contrast coefficients when n are equal
Specifying contrasts with unequal sample sizes
Experiment-wise type 1 error level
Lecture 5
Orthogonal contrasts
Helmert contrasts
Trend analysis
Post-hoc contrasts
Tukey
Scheffe
Lecture 6
Two-way ANOVA
Conduct using SPSS
Lecture 7
Simple effects
Balanced vs unbalanced designs
One way vs two way ANOVA
Lecture 8
ANCOVA
Reduction of error variance
Lecture 9
Assumptions ANCOVA
ANCOVA summary
Repeated measures ANOVA
Lecture 10
Repeated measures ANOVA: assumptions
Repeated measures ANOVA: our standard steps
Another example of RM ANOVA but also with a between-subjects factor
Lecture 11
Goal experiments
Controlling for confounding variables
Lecture 1
Descriptive statistics
Descriptive statistics summarize the data (a list of raw data is hard to interpret). Two ways:
- With a distribution
- With sample statistics
Frequency distribution: the data are summarized by grouping observations with the same score. Its features
are central tendency (mean, median, mode) and dispersion (how much scores deviate from
the most characteristic score: range, variance, standard deviation).
Inferential statistics: draw conclusions about a population based on a sample.
Hypothesis testing
1. Formulate hypotheses: H0 and H1
2. Determine the decision rule: when is a result statistically significant (significance level α)?
- α is often chosen as 0.05.
- The p-value is the probability of obtaining the observed result (or a more extreme one), assuming H0 is true.
3. Determine p-value based on SPSS output
4. Decision on significance and conclusion
If the p-value is lower than α, you can reject H0.
If the p-value is larger than α, you do not reject H0 (H0 is retained; this does not prove it is true).
One-sided testing: divide the two-sided sig. from SPSS by two to find the one-sided sig. (provided the effect is in the hypothesized direction).
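A minimal sketch of these four steps with a one-sample t-test in Python/scipy instead of SPSS; the scores and the hypothesised mean (mu_H0 = 100) are invented purely for illustration:

```python
# Sketch of hypothesis-testing steps 1-4 with a one-sample t-test (made-up data).
import numpy as np
from scipy import stats

scores = np.array([98, 104, 101, 97, 106, 103, 99, 102])
mu_H0 = 100          # step 1: H0: mu = 100, H1: mu != 100
alpha = 0.05         # step 2: decision rule

t_stat, p_two_sided = stats.ttest_1samp(scores, mu_H0)   # step 3: p-value
p_one_sided = p_two_sided / 2  # one-sided sig. = two-sided sig. / 2 (effect in predicted direction)

# Step 4: decision and conclusion
if p_two_sided < alpha:
    print(f"p = {p_two_sided:.3f} < {alpha}: reject H0")
else:
    print(f"p = {p_two_sided:.3f} >= {alpha}: do not reject H0")
```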
Point estimation: which single value lies closest to the population value?
- For the mean μ, the best guess is the sample mean X̄.
- For the variance σ², the best guess is the sample variance S².
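As a small illustration with the same made-up scores as above: the sample mean estimates μ, and the unbiased sample variance S² divides by N − 1 (ddof=1 in numpy):

```python
# Point estimates from a made-up sample: X-bar estimates mu, S^2 estimates sigma^2.
import numpy as np

scores = np.array([98, 104, 101, 97, 106, 103, 99, 102])
x_bar = scores.mean()      # point estimate of the population mean mu
s2 = scores.var(ddof=1)    # point estimate of sigma^2: sum of squared deviations / (N - 1)
print(f"X-bar = {x_bar:.2f}, S^2 = {s2:.2f}")
```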
Interval estimation
95% confidence interval: X̄ ± t × s/√N, where t is the critical value for N − 1 degrees of freedom.
If μH0 falls in the confidence interval, you can’t reject H0.
If μH0 does not fall in the confidence interval, you can reject H0.
If H0 is true, 95% of all samples will produce a CI that contains μH0; 5% will produce a CI that does not.
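A sketch of this interval in Python, using the same invented data and mu_H0 = 100 as in the earlier example; the t critical value uses N − 1 degrees of freedom, and the decision matches the two-sided test at α = 0.05:

```python
# 95% confidence interval: X-bar +/- t * s / sqrt(N), with made-up data.
import numpy as np
from scipy import stats

scores = np.array([98, 104, 101, 97, 106, 103, 99, 102])
mu_H0 = 100
N = len(scores)
x_bar = scores.mean()
s = scores.std(ddof=1)                   # sample standard deviation
t_crit = stats.t.ppf(0.975, df=N - 1)    # two-sided 95% -> upper 2.5% point

lower = x_bar - t_crit * s / np.sqrt(N)
upper = x_bar + t_crit * s / np.sqrt(N)
print(f"95% CI: [{lower:.2f}, {upper:.2f}]")

# If mu_H0 lies inside the interval, H0 is not rejected (alpha = 0.05, two-sided).
print("do not reject H0" if lower <= mu_H0 <= upper else "reject H0")
```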
Testing means
5 different tests of means were discussed:
One population:
1. z-test
2. t-test
Two populations:
3. H0: μ1 = μ2, σ1 = σ2: independent samples t-test (equal variances assumed)
4. H0: μ1 = μ2, σ1 ≠ σ2: independent samples t-test (equal variances not assumed; Welch)
5. H0: δ = μ1 − μ2 = 0, σD: dependent (paired) samples t-test
Example: sig. (two-tailed) = 0.105. Since 0.105 > 0.05 (α), H0 cannot be rejected.
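A sketch of tests 3–5 with scipy (the z-test and one-sample t-test follow the earlier example); both groups are invented data:

```python
# Two-population tests of means on invented data.
import numpy as np
from scipy import stats

group1 = np.array([12.1, 13.4, 11.8, 12.9, 13.0, 12.6])
group2 = np.array([11.5, 12.0, 11.2, 12.4, 11.9, 11.7])

# 3. Independent samples, equal variances assumed (pooled t-test)
t3, p3 = stats.ttest_ind(group1, group2, equal_var=True)

# 4. Independent samples, equal variances not assumed (Welch's t-test)
t4, p4 = stats.ttest_ind(group1, group2, equal_var=False)

# 5. Dependent (paired) samples: H0: delta = mu1 - mu2 = 0
t5, p5 = stats.ttest_rel(group1, group2)

for label, p in [("3. pooled", p3), ("4. Welch", p4), ("5. paired", p5)]:
    print(f"{label}: p = {p:.3f} ->", "reject H0" if p < 0.05 else "do not reject H0")
```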