Specific track Clinical: Clip 1
Power and effect size & Randomization + manipulation check
Effect size = an objective and standardized measure of the magnitude of an observed effect (the proportion of total variance in the dependent variable that is associated with an independent variable).
- Partial eta squared in an ANOVA:
small effect .01, medium effect .06, large effect >= .14
- Pearson r in a regression: the size of a correlation
small effect .10, medium effect .30, large effect >= .50
The larger the partial eta squared, the more relevant (substantial) the result; a minimal computation sketch follows below.
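A minimal sketch of how partial eta squared falls out of a one-way ANOVA, using only numpy; the group scores are made up for illustration, and SS_effect / (SS_effect + SS_error) is the standard formula for the one-way case:

    import numpy as np

    # Hypothetical scores for three groups (illustrative data, not from the notes)
    groups = [np.array([4.1, 5.0, 5.2, 4.8]),
              np.array([5.9, 6.3, 5.7, 6.1]),
              np.array([5.0, 5.4, 4.9, 5.6])]

    grand_mean = np.mean(np.concatenate(groups))
    ss_effect = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)  # between-groups SS
    ss_error = sum(((g - g.mean()) ** 2).sum() for g in groups)             # within-groups SS

    partial_eta_sq = ss_effect / (ss_effect + ss_error)
    print(f"partial eta squared = {partial_eta_sq:.3f}")  # compare to the .01 / .06 / .14 benchmarks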
Statistical power = the probability of detecting an existing effect of a particular size, and therefore the probability of correctly rejecting the null hypothesis. Power = 1 - beta, where beta is the probability of a type 2 error.
- Your goal is to collect as much data as necessary to reach a power of at least 0.80 (see the sketch below).
Note that the effect size does not depend on the sample size: shrinking a large sample does not make a small effect bigger. The smaller the expected effect, the larger the sample you need to reach the target power.
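A sketch of an a priori power analysis with statsmodels, solving for the sample size per group that gives power 0.80; the two-sample t-test and the medium effect size (Cohen's d = 0.5) are assumptions for illustration:

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.5,   # Cohen's d, assumed medium effect
                                       alpha=0.05,        # chosen type 1 error rate
                                       power=0.80,        # target power from the notes
                                       alternative='two-sided')
    print(f"required n per group: {n_per_group:.1f}")     # about 64 per group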
Type 1 error (alpha) = the probability that an effect is detected where in fact no effect exists: the null hypothesis (H0) is rejected when in fact it is true -> chance of a false positive.
Type 2 error (beta) = the probability that no effect is detected where an effect does in fact exist: the null hypothesis (H0) is not rejected when in fact it is false -> chance of a false negative.
- The probability of detecting an effect (the power) depends on the significance level that is chosen (and thus on the chance of a type 1 error), on the effect size, and on the sample size; the simulation below illustrates this.
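A small Monte Carlo sketch of both error rates for a two-sample t-test; the settings (alpha = .05, n = 30 per group, true effect d = 0.5) are illustrative assumptions:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha, n, d, n_sims = 0.05, 30, 0.5, 5000

    false_pos = hits = 0
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n)
        b_null = rng.normal(0.0, 1.0, n)   # H0 true: no real effect
        b_alt = rng.normal(d, 1.0, n)      # H0 false: real effect of size d
        false_pos += stats.ttest_ind(a, b_null).pvalue < alpha
        hits += stats.ttest_ind(a, b_alt).pvalue < alpha

    print(f"type 1 error rate ~ {false_pos / n_sims:.3f}")                   # close to alpha
    print(f"power ~ {hits / n_sims:.3f}, type 2 ~ {1 - hits / n_sims:.3f}")  # power = 1 - beta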
Experimental research: Objective = establish causal relationships
Variable A (and only variable A) -> causes a change in variable B
Three relevant comparisons:
- Comparison 1: Baseline comparison/measurement with the control group, to determine whether your experimental groups differ at baseline -> they should not differ.
Randomization = participants are allocated to a group or condition at random, rather than the allocation being determined by the researcher; the randomization check (below) verifies whether this produced comparable groups.
- Comparison 2: Addresses variable A (the factor that you manipulated rather than merely measured) -> you use a manipulation check to verify that the manipulation of factor A worked: a post-manipulation measurement for all groups.
- Comparison 3: The actual effect you are looking for (how you test it depends on the number of measurements you took) ->
Pre-post measurement = compare an outcome before the intervention and after the intervention. This is a longitudinal design: the strongest design for determining causation. Alternatively, you can measure at one time point (a cross-sectional design, in which you compare your experimental/intervention group with a control group); a sketch of both tests follows below.
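A sketch of how comparison 3 looks in each design, with made-up data: a paired t-test for the pre-post (longitudinal) case and an independent-samples t-test for the cross-sectional case:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Pre-post (longitudinal): the same participants measured twice
    pre = rng.normal(5.0, 1.0, 25)
    post = pre + rng.normal(0.6, 0.8, 25)      # hypothetical within-person change
    print("pre-post:", stats.ttest_rel(pre, post))

    # Cross-sectional: intervention vs control at a single time point
    intervention = rng.normal(5.6, 1.0, 25)
    control = rng.normal(5.0, 1.0, 25)
    print("cross-sectional:", stats.ttest_ind(intervention, control))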
Randomization check = are the groups equal prior to the manipulation?
- Not significant: excellent.
- Significant: include the variable as a covariate (explore its influence).
Manipulation check = was your manipulation of factor A successful?
- Significant: excellent.
- Not significant: you cannot conclude anything about the effect of the manipulation.
A sketch of both checks follows below.
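Both checks can be run as simple group comparisons; this sketch uses made-up baseline and post-manipulation scores (the variables and numbers are illustrative assumptions):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    # Randomization check: a baseline variable (e.g. age) should NOT differ between groups
    age_exp = rng.normal(40, 10, 30)
    age_ctrl = rng.normal(41, 10, 30)
    print("randomization check:", stats.ttest_ind(age_exp, age_ctrl))       # want p >= .05

    # Manipulation check: the manipulated factor SHOULD differ between groups
    stress_exp = rng.normal(7.0, 1.5, 30)    # e.g. rated stress after a stress induction
    stress_ctrl = rng.normal(4.5, 1.5, 30)
    print("manipulation check:", stats.ttest_ind(stress_exp, stress_ctrl))  # want p < .05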