Systematic Review Summary
HC1
Systematic reviews meticulously select, evaluate, and synthesize previous studies
on a given topic to theorize, generalize, inform practice, and steer future
research efforts.
Focus on the design of the studies, not just their outcomes.
Pay attention to the instruments used, whether inclusion and exclusion criteria are
spelled out, how results are statistically combined, how effects are computed,
consistency across studies, errors, sample bias, etc.
Generalizability: generalizing results obtained with your operationalization to the
broader construct, or from the sample to the population.
One study is not enough for generalizing: there is always random sampling error and
every study has some imperfections. The smaller the n in a study, the bigger the
sampling error.
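As a reminder (standard statistics, not stated in the lecture itself), the sampling error of a mean shrinks with the square root of the sample size:

SE(\bar{x}) = \sigma / \sqrt{n}

so quadrupling n only halves the sampling error, which is why a single small study is a weak basis for generalization.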
Meta-information: determining effects, associations, prevalence, etc. across studies.
Steps of a SR: choose a subject; formulate the problem; conduct the search (data
collection); extract outcomes and effect sizes from the relevant studies; evaluate
(main outcome) and interpret (quality) the studies; report the SR. (A sketch of
combining effect sizes follows below.)
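The "combine effect sizes" step is usually done with inverse-variance weighting. Below is a minimal illustrative Python sketch, assuming made-up study names, effect sizes, and variances; the fixed-effect model shown is just one of several pooling options.

```python
import math

# Hypothetical per-study effect sizes (e.g., Cohen's d) and their variances.
studies = [
    {"name": "Study A", "effect": 0.30, "variance": 0.02},
    {"name": "Study B", "effect": 0.45, "variance": 0.05},
    {"name": "Study C", "effect": 0.10, "variance": 0.01},
]

# Fixed-effect (inverse-variance) pooling: each study is weighted by 1/variance,
# so more precise studies contribute more to the pooled estimate.
weights = [1.0 / s["variance"] for s in studies]
pooled_effect = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# Rough 95% confidence interval around the pooled effect.
ci_low = pooled_effect - 1.96 * pooled_se
ci_high = pooled_effect + 1.96 * pooled_se

print(f"Pooled effect: {pooled_effect:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

The weighting means a large, precise study moves the pooled estimate much more than a small, noisy one, which is exactly why the review step of extracting effect sizes and their variances matters.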
Errors in research:
Misconduct: data is fabricated or falsified.
Suboptimal design: non-functioning designs.
HARKing: Hypothesizing After the Results are Known, i.e., formulating your hypothesis
only once the data are known, basing it on patterns you have already found in the data.
File drawer problem: useful studies that never get published, mostly studies 'showing'
there is no effect; this reflects publication bias.
Overly positive reporting: selective outcome reporting, i.e., reporting only the effects
that 'work' in a study and ignoring or hiding the other outcomes. This can partly be
avoided by pre-registering expectations: since conclusions about outcomes are usually
based on theory, you can write your expectations down beforehand.
P-hacking: misusing data analyses to find patterns that are significant, producing false
positives. Examples: steering the analysis toward a desired result; measuring additional
variables and later using them as moderators/mediators or as grounds to exclude
participants; measuring the same dependent variable in several ways and keeping only the
version that 'works' (illustrated below).
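A small simulation (my own illustration, not from the lecture) of why this inflates false positives: even when there is no true effect, testing several versions of the dependent variable and keeping the best result makes a 'significant' finding far more likely than the nominal 5%. The test below is a crude z-approximation and the outcomes are treated as independent, which keeps the sketch simple.

```python
import random
import statistics

random.seed(1)

def looks_significant(n=30):
    """Crude z-approximation for a 'significant' difference between two null groups.

    Both groups come from the same distribution, so any hit is a false positive.
    """
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = ((statistics.variance(a) + statistics.variance(b)) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96  # roughly p < .05

def false_positive_rate(n_outcomes, n_sims=2000):
    """Fraction of simulated studies with at least one 'significant' outcome."""
    hits = sum(
        any(looks_significant() for _ in range(n_outcomes))
        for _ in range(n_sims)
    )
    return hits / n_sims

print("1 outcome :", false_positive_rate(1))   # close to the nominal 5%
print("5 outcomes:", false_positive_rate(5))   # well above 5%: multiple testing inflates false positives
```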
Exploratory research: testing every possible outcome, without prior hypotheses.
Confirmatory research: testing expectations specified beforehand.