- Maximum cumulative percentage (p.10)
Δcp(x) = |cp1(x) – cp2(x)|; the highest value of Δcp(x) is Δ (p.10)
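A minimal Python sketch (hypothetical frequencies, not data from the reader) of Δ as the largest distance between two cumulative-percentage curves:

    # Two score distributions as frequencies per ordered category (hypothetical data)
    freq1 = [10, 20, 40, 20, 10]
    freq2 = [5, 15, 30, 30, 20]

    def cum_pct(freqs):
        total, running, out = sum(freqs), 0, []
        for f in freqs:
            running += f
            out.append(100 * running / total)
        return out

    cp1, cp2 = cum_pct(freq1), cum_pct(freq2)
    delta = max(abs(a - b) for a, b in zip(cp1, cp2))   # Δ = max |cp1(x) - cp2(x)|
    print(delta)                                        # 20.0 for this example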
- Boxplot (p. 11 - 13)
- Arithmetic mean (p. 14)
x̄ = Σx / n (sum of the scores divided by the number of scores)
- Deviation (p.15,16)
- Variance (p. 16)
Var = ss/df
ss = sum of squared deviations
df = degrees of freedom (n-1 in sample)
- Standard deviation (p.17)
s = √var
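A minimal Python sketch of these definitions, using hypothetical scores:

    scores = [4, 6, 7, 5, 8]                 # hypothetical sample
    n = len(scores)
    mean = sum(scores) / n                   # x̄ = Σx / n
    deviations = [x - mean for x in scores]  # deviation of each score from the mean
    ss = sum(d ** 2 for d in deviations)     # ss = sum of squared deviations
    var = ss / (n - 1)                       # Var = ss / df, with df = n - 1 for a sample
    s = var ** 0.5                           # s = √Var
    print(mean, var, s)                      # 6.0 2.5 1.58...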
- Effect size D (p.18, 41)
D = (m1 – m2) / Spool (difference between the two group means in pooled standard deviations)
- Pooled standard deviation Spool (p.19, 41)
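A short Python sketch for two hypothetical groups, assuming D is the mean difference expressed in pooled standard deviations (Cohen's d):

    import statistics

    group1 = [5, 6, 7, 8, 9]                 # hypothetical data
    group2 = [3, 4, 5, 6, 7]
    n1, n2 = len(group1), len(group2)
    m1, m2 = statistics.mean(group1), statistics.mean(group2)
    s1, s2 = statistics.stdev(group1), statistics.stdev(group2)   # sample sd, df = n - 1

    # S²pool = ((n1 - 1)·s1² + (n2 - 1)·s2²) / (n1 + n2 - 2)
    s2_pool = ((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)
    s_pool = s2_pool ** 0.5
    d = (m1 - m2) / s_pool                   # D = (m1 - m2) / Spool
    print(s_pool, d)                         # 1.58... 1.26...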
- Outliers (p.20 - 21)
How many standard deviations a score lies away from the mean (z-score)
A |z| above 3 counts as an outlier
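A short Python sketch of the |z| > 3 rule on hypothetical data:

    import statistics

    # Hypothetical data: values around 10 with one extreme score
    data = [9, 10, 11, 10, 9, 11, 10, 10, 9, 11, 10, 9, 11, 10, 10, 9, 11, 10, 10, 100]
    m, s = statistics.mean(data), statistics.stdev(data)
    z_scores = [(x - m) / s for x in data]   # distance from the mean in standard deviations
    outliers = [x for x, z in zip(data, z_scores) if abs(z) > 3]
    print(outliers)                          # [100]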
- Δ vs. D (p. 22)
Practical 2a: Introduction to SPSS / descriptive statistics
- S²pool (p.41)
S²pool = ((n1 – 1)·s1² + (n2 – 1)·s2²) / (n1 + n2 – 2)
- Eta² (p. 41, 47, 95, 100)
Eta² = SSbetween / SStotal
The proportion of variation in Y explained by X (p.48)
- D with eta (p.41, 49)
D = 2·eta / √(1 – eta²)
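A short Python sketch (hypothetical groups of equal size) of eta² from the sums of squares and the conversion to D:

    import statistics

    group1 = [5, 6, 7, 8, 9]                 # hypothetical data
    group2 = [3, 4, 5, 6, 7]
    scores = group1 + group2
    grand_mean = statistics.mean(scores)

    ss_total = sum((x - grand_mean) ** 2 for x in scores)
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in (group1, group2))
    eta2 = ss_between / ss_total             # eta² = SSbetween / SStotal
    eta = eta2 ** 0.5
    d = 2 * eta / (1 - eta2) ** 0.5          # D = 2·eta / √(1 - eta²)
    print(eta2, d)                           # 0.33... 1.41...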
- R² from ANOVA sum of squares (p.43)
R² = SSl / SSd (linear sum of squares divided by the total sum of squared deviations)
The proportion of linearly explained variation (p.51)
Practical 2b: Proportion of explained variation
- Stepwise calculation eta² (p.47)
- SSe (p.48, 52)
- Changes in eta² (p.49)
- Relationship D with eta (p. 49)
- Residual and linear part (p.51,52)
d (total deviation from the mean), e (residual) and l (linear part); their squared sums give R² = Σl² / Σd² = 1 – Σe² / Σd²
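A short Python sketch of R² from d, e and l, using hypothetical x/y data and a least-squares line:

    x = [1, 2, 3, 4, 5]                      # hypothetical paired observations
    y = [2, 4, 5, 4, 6]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n

    # Least-squares slope and intercept
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    y_hat = [a + b * xi for xi in x]

    d2 = sum((yi - my) ** 2 for yi in y)                   # d = y - ȳ (total deviation)
    e2 = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))   # e = y - ŷ (residual)
    l2 = sum((yh - my) ** 2 for yh in y_hat)               # l = ŷ - ȳ (linear part)
    print(l2 / d2, 1 - e2 / d2)                            # both equal R² (0.72...)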
Practical 3: Theory of estimates and testing
- Sample vs. population (p.55)
Population mean (μ) = sample mean (m)
Population proportion (π) = sample proportion (p)
Population standard deviation (σ) = sample standard deviation (s)
Population correlation (ρ) = sample correlation (r)
- Normal distribution and Z-score (p.57)
- Central Limit theorem for probability distribution of a sample mean (M) (p.58)
Mean = expectation -> E(M) = μ (p.59)
Standard deviation = standard error -> σM = SEM = σ / √n
For a finite population of size N, correct the standard error by multiplying by √[(N – n)/(N – 1)]
The shape of the sampling distribution is (approximately) normal, unless the sample is small
Sample variance: E(s²) = σ² (p.60)
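A simulation sketch (hypothetical normal population) illustrating E(M) = μ and SEM = σ/√n:

    import random
    import statistics

    random.seed(1)
    mu, sigma, n = 50, 10, 25
    # Draw many samples of size n and look at the distribution of the sample mean M
    means = [statistics.mean(random.gauss(mu, sigma) for _ in range(n)) for _ in range(10_000)]
    print(statistics.mean(means))            # ≈ μ = 50
    print(statistics.stdev(means))           # ≈ σ/√n = 10/5 = 2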
- Estimation interval for μ if σ is known (p.61)
m – 1.96·σ/√n < μ < m + 1.96·σ/√n
- Estimation interval for μ if σ is unknown (p.61)
m – t.025·s/√n < μ < m + t.025·s/√n
Probability distribution -> T = (M – μ) / (S/√n)
Normal distribution -> Z = (M – μ) / (σ/√n)
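A Python sketch of both intervals for hypothetical data; scipy is assumed available for the t quantile:

    from math import sqrt
    import statistics
    from scipy import stats

    sample = [52, 48, 55, 50, 49, 53, 47, 51]    # hypothetical data
    n = len(sample)
    m, s = statistics.mean(sample), statistics.stdev(sample)

    # σ known (assumed σ = 3): m ± 1.96·σ/√n
    sigma = 3
    ci_z = (m - 1.96 * sigma / sqrt(n), m + 1.96 * sigma / sqrt(n))

    # σ unknown: m ± t.025·s/√n with df = n - 1
    t_crit = stats.t.ppf(0.975, df=n - 1)
    ci_t = (m - t_crit * s / sqrt(n), m + t_crit * s / sqrt(n))
    print(ci_z, ci_t)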
- Student probability distribution (p. 63)
Similar to the normal distribution: both are bell-shaped and symmetric around 0.
The difference is that the Student distribution is more dispersed; how much more depends on the df.
The higher the sample size (df), the more it resembles the normal distribution.
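A Python sketch comparing the t critical value with the normal 1.96 as df increases (scipy assumed available):

    from scipy import stats

    for df in (2, 5, 10, 30, 100):
        # Two-sided 95% critical value: clearly above 1.96 for small df,
        # approaching the normal value as df grows
        print(df, round(stats.t.ppf(0.975, df), 3))
    print("normal:", round(stats.norm.ppf(0.975), 3))    # 1.96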
- Testing with Exceedance probability / Probability theory (p.65)
- Testing with critical value (p.66)
- Types of Hypotheses (p. 67)
- Testing with Confidence interval (p.69)
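A Python sketch (hypothetical data, H0: μ = 50, two-sided α = .05) showing that the exceedance probability, the critical value and the confidence interval give the same decision:

    from math import sqrt
    import statistics
    from scipy import stats

    sample = [53, 55, 49, 56, 54, 52, 57, 51]    # hypothetical data
    mu0, alpha = 50, 0.05
    n = len(sample)
    m, s = statistics.mean(sample), statistics.stdev(sample)
    se = s / sqrt(n)
    t = (m - mu0) / se

    p = 2 * stats.t.sf(abs(t), df=n - 1)             # exceedance probability (p-value)
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)    # critical value
    ci = (m - t_crit * se, m + t_crit * se)          # confidence interval for μ

    # All three say "reject H0" for these data
    print(p < alpha, abs(t) > t_crit, not (ci[0] <= mu0 <= ci[1]))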
Practical 4: Differences between two groups
- Flow chart (p. 74)