Qualitative and Mixed Research Methods
Problems in Quantitative Psychology
Quantitative approaches: measuring and converting things into numbers, then running statistical tests
Only ‘allows’ numerical information in order to remain objective
used to keep human intuition, which is prone to bias, out of the process of testing theories
Qualitative approaches: using words —> extracting meaning from verbal data
Replication crisis: many published findings fail to replicate, driven in part by the assumption that a significant result is a true result
Brian Nosek's project estimating the reproducibility of psychological science —> found that only about 1/3 to 1/2 of the replication attempts produced significant results
False positives: published results deemed significant when the effect is not real —> in principle, these eventually get weeded out through replication and further research
The surprise: over 50% of the replicated results appear to be false positives
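A minimal simulation sketch (illustrative numbers, not from the lecture) of how a literature filtered on significance can end up with a large share of false positives:

```python
import random

random.seed(0)

# Illustrative assumptions (not from the lecture): 10% of tested
# hypotheses are actually true, tests have 50% power, and the
# false-positive rate (alpha) is 5%.
BASE_RATE, POWER, ALPHA = 0.10, 0.50, 0.05

significant = 0
false_pos = 0
for _ in range(100_000):
    is_true = random.random() < BASE_RATE
    # A real effect is detected with probability POWER; a null effect
    # still comes out "significant" with probability ALPHA.
    if random.random() < (POWER if is_true else ALPHA):
        significant += 1
        if not is_true:
            false_pos += 1

# Share of the "publishable" (significant) results that are false positives.
print(round(false_pos / significant, 2))
```

Under these assumed numbers, nearly half of the significant results are false positives even though alpha is only 5%, because most tested hypotheses were false to begin with.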
Scientific method: prevents the problem of confirmation bias
No “one” specific method —> different disciplines and different researchers approach things
differently
Null hypothesis significance testing (NHST): construct a hypothesis about some effect in the world that we would expect to occur under certain circumstances if the theory were true
Contriving a situation which will produce unambiguous evidence for or against the theory
Problem with the crucial experiment: experiments don't prove a theory right or wrong; they only provide probabilistic evidence
probabilistic = ambiguous —> still susceptible to confirmation bias
Neyman and Pearson: decision rules based on p values were meant to provide an unambiguous process, producing unambiguous results
People don't follow the rules —> higher false-positive rate, and confirmation bias remains a problem
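The NHST logic can be sketched with a simple permutation test on hypothetical data (pure standard library; the group data and effect size are made up for illustration):

```python
import random
import statistics

random.seed(1)

def permutation_p_value(group_a, group_b, n_perm=5000):
    """Two-sided permutation test for a difference in means.

    Under the null hypothesis the group labels are exchangeable, so we
    shuffle them repeatedly and ask how often the shuffled difference
    is at least as extreme as the observed one.
    """
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_perm

# Hypothetical data: a modest treatment effect (0.5 SD).
a = [random.gauss(0.5, 1) for _ in range(30)]
b = [random.gauss(0.0, 1) for _ in range(30)]
p = permutation_p_value(a, b)
print(p)  # small p = the data would be surprising if the null were true
```

The p value is only probabilistic evidence against the null, which is exactly why treating "p < .05" as a definitive verdict invites the problems described above.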
Four horsemen of the reproducibility apocalypse (Dorothy Bishop)
1. Publication bias: significant findings are more likely to get published, resulting in the bottom-drawer effect
Bottom-drawer effect: chucking non-significant findings in a bottom drawer
2. P-hacking: committing small scientific misdemeanours in order to produce a significant result, e.g. selective reporting
3. HARKing (Hypothesising After the Results are Known): looking at the results first, then writing the hypothesis
4. Low statistical power: underpowered studies miss true effects and inflate the share of significant results that are false positives
Solutions to the replication crisis:
1. Open Science —> pre-register the analysis plan
2. Using the p value as a continuous measure of evidence against the null —> removes the incentive to decide whether a result is "significant" or not
Experiments are only a small part of science; they are not the be-all and end-all for a theory —> they only give you some idea of whether a theory is true or not
The Qualitative Approach
Qualitative Approach: drawing on fundamental assumptions, beliefs, and knowledge to interpret ambiguous information
Methods to maintain objectivity in qualitative research:
1. Reflexivity: maintaining mindful awareness of own interpretations when interacting with data —>
process for spotting biases
Source: Cohen & Crabtree, 2006; Hsiung, 2010
2. Positionality: being personally aware of your position or biases on a topic
Source: Holmes, 2014; Malterud, 2001, pp. 483–484
3. Qualitative researchers approach other papers with a more critical attitude —> consider researchers’
position, tone, and whether or not they approached the topic in an objective way
There is no ‘result’ in qualitative research
Qualitative alternatives to quantitative weaknesses:
1. Problem of validity: to what extent does your measurement tool actually measure what you intend it to measure?
Unable to answer this question in psychology —> must use indirect ways of assessing validity
Correlation values are quite low due to nuisance (confounding) variables
Statistics are being run on ‘weak’ measures
In qualitative research, data might not be an accurate representation, but is still much richer
2. Reliability: the degree to which the tool's validity remains constant across individuals, time, and context
Pragmatic limitation of quantitative psychology: it tends to study the effect of one factor in a precise way in a single experiment
Qualitative psychology embraces all these factors
3. Confounds: explanations of experimental finding other than that the theory is true —> weakens
experiment
The p value assumes the measure is perfect
Source: Hubbard, Haig, Parsa
Qualitative solution: abandon experimental format
4. Pragmatic/ethical issues
Quantitative data can't measure some phenomena directly —> too difficult to boil them down to a number
Qualitative data don't aim to be so precise in the first place —> fewer problems
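Point 1 above (statistics run on "weak" measures) can be illustrated numerically: measurement noise attenuates an observed correlation well below the true one. A sketch with hypothetical noise levels:

```python
import random
import statistics

random.seed(3)

def pearson_r(xs, ys):
    # Plain Pearson correlation coefficient.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

n = 5000
true_scores = [random.gauss(0, 1) for _ in range(n)]
# The construct genuinely drives the outcome...
outcome = [t + random.gauss(0, 1) for t in true_scores]
# ...but we can only observe a noisy measure of the construct.
measured = [t + random.gauss(0, 2) for t in true_scores]

r_true = pearson_r(true_scores, outcome)
r_observed = pearson_r(measured, outcome)
print(round(r_true, 2), round(r_observed, 2))  # observed r is much weaker
```

The noisier the measure, the weaker the observed correlation, which is one reason correlation values in psychology tend to be low even when the underlying relationship is strong.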
Thematic Analysis
Small q TA: discovering themes that already exist within a dataset, or finding evidence for themes that pre-exist the data
Accept that there is potential for bias to influence our findings
Source: Boyatzis, 1998; Guest et al., 2012; Joffe, 2012
Big Q TA: analysis becomes a creative rather than a technical process —> result of the researcher’s
engagement with the dataset and the application of their analytical skills and experiences, and personal
and conceptual standpoints
Idea of objective real findings is given up
Source: Braun and Clarke, 2006; Langdridge, 2004
Continuum of Thematic analysis: