PA literature:
There is evidence that people reading the same research paper but beginning with opposing
beliefs come away with even more polarized opinions than before they were exposed to the
new evidence. This can happen because people misinterpret the evidence they have read,
they are motivated to think in a certain way, or because of other cognitive malfunctions.
We use the phrase ability to think critically as a positive quality. The phrase refers to the skill
of thinking about an issue, analyzing it, looking at it from all sides, and weighing whether
there is sufficient evidence of good-enough quality to warrant making a reasoned judgment
that is as free of personal bias as possible.
Psychologists may spend their time in any of three ways:
- Doing research to generate knowledge
- Transmitting knowledge to others by teaching or by directing the research of others
- Applying psychological knowledge in the form of clinical or consulting services
These can be done singly or in any combination. All psychologists must be consumers of
research. It stands to reason that they must be able to evaluate research critically. Having a
good grasp of the scientific method and the principles of research design, and knowing what
to look for, are invaluable in this pursuit.
Any critique of what is presented as a scientific study addresses the way the study does or
does not meet the scientific standards for evidence and proof. The critique focuses on the
scientific soundness, not on whether the findings conflict with your preexisting faith,
beliefs, or ideas about social acceptability, and not on whether the results conflict with
expert opinion or clash with other methods of establishing truth and gathering evidence.
Even though it cannot provide more than probable truth and must be taken with
caution, empirical (and particularly experimental) research is still the best way to look for
answers to certain types of questions.
Critical reading requires a mental set of a particular kind. This mental set can be taught,
encouraged, and nurtured. Conversely, it can be discouraged or even forbidden. What is
involved is a kind of general open skepticism that enables you to bring a "show me the
evidence" attitude toward reading regardless of how authoritative the author may be or how
attractively the words are packaged.
You focus on each assertion that is written and contemplate the meaning behind it. You also
think about what was not written by the researchers and wonder why it is absent. There are
three outcomes after thinking critically:
- If everything meets the most stringent cognitive challenges, you come away enriched
and gratified
- You may accept some of it but have reservations about other parts
- You may reject it in its entirety and resent having spent the time on it
Uncritical acceptance of conclusions leads to the incorporation of misinformation into your
body of knowledge.
Digests and abstracts do serve a useful purpose. The best way for you to use them is as a
screening device for subject matter that may be of special interest, to be followed by critical
reading of the actual articles.
The critical reader is interactive, actively anticipating what is to come. In scientific reading,
this anticipation involves the use of rules for conducting valid research. These rules are used
like yardsticks that are applied when they are relevant to the narrative about the study. The
critical reader then discovers whether these expectancies have been met:
- The research question guides the review of previous literature that sets the context
for the scientific report;
- The literature review and the statement of the problem relate clearly to the
hypotheses
- The hypotheses set up research design expectancies and suggest what variables
should be manipulated, measured, and/or controlled (so as not to bias the
research outcomes);
- The hypotheses, appropriate design, and type of data dictate the method of data
analysis
- The analysis of the data influences the kinds of conclusions, inferences, and
generalizations that can be made
Articles in some journals have been pre-screened by referees and an editor. This is called
peer review. Peer review can be described as "a formal system whereby a piece of
academic work is scrutinized by people who were not involved in its creation but are
considered knowledgeable about the subject". Peer review is used to evaluate the quality of
reports submitted for journal publication, proposals to present at professional meetings,
grant applications, reports, and book proposals and manuscripts.
There are many journals that do not perform peer review before they publish articles. Your
first obligation as a critical reader is to discover whether the journal conducts peer review.
No article that succeeds in being published is accompanied by a guarantee of excellence.
No research is perfect, nor does any research meet all the standards on the yardstick. Flaws
and weaknesses appear in all published articles.
The effectiveness of a reader is therefore dependent on knowledge of research design and
on skillful application of that knowledge.
In recent years, efforts have been made to improve the reporting of research in journal
articles. It used to be the case that limitations on the number of pages that could be printed
in a journal constrained the amount of information that any article could contain. Today, there
are journals that appear only online. Printed journals use online supplemental files to provide
additional information about the methods and results of the studies they publish. So,
limitations on the information researchers can provide about what they did have largely
disappeared.
Possible flaws:
- An abbreviated summary or abstract of the study is presented at the beginning of the
journal article or research report. You should not ignore what is written here.
- Introduction → on reading the Introduction, you may discover no hypotheses are
explicitly stated. At other times you will not be sure what the hypotheses are,
assuming the researchers had any in mind when they started the research. In the
Discussion, however, the researchers are pleased to report that the results support
their hypothesis. The rationale for the originally nonexistent hypotheses is now given
and is illuminated by the results. You should assume that these were not prespecified
hypotheses but were post hoc: they were adopted after the results were seen.
- Method → the researchers may report that, during the research, the sampling plan
was altered or the procedures had to be modified for the study to be completed. The
changes, which clearly compromised the internal validity of the study, are not
mentioned as limitations when the researchers discuss the results.
- Results → the same result may be referred to several times in a research report. Be
careful to check that it is described in consistent language each time.
- Discussion → suppose that your reading of the Results section reveals that the data
seriously violate the assumptions of the statistical technique used in the study. The
researchers proceed without considering any alternatives, arguing that the technique
is "robust." Thereafter, the issue is completely disregarded, and there are no
reservations expressed in the Discussion section
- Suppose a study used three measures of the dependent variable, all mentioned in the
Method section. Only one of them yields results in the predicted direction. There is no
advance reason given for you to believe that this measure was more important or more valid
than the others. Then, the researchers base the entire discussion and conclusions on the
one measure that came out as predicted, while giving short shrift to the other two. This is a
post hoc interpretation and should be treated as such.
- Researchers use four independent variables and 10 dependent variables, and obtain
four statistically significant results out of 40 comparisons tested. In the discussion,
the researchers dismiss the 36 that do not come out as expected and make much
ado about the four that did. This is called data mining, fishing for
significance, or p-hacking. It capitalizes on chance findings.
- Measures are mentioned in the Method section but no mention is made of some of
them in Results and Discussion. This is called selective reporting or data censoring,
and it is a very serious matter.
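The arithmetic behind the fishing-for-significance flaw above is easy to check: at α = .05, 40 truly null comparisons are expected to yield about 40 × .05 = 2 "significant" results by chance alone. A minimal sketch of this (all numbers made up for illustration, using a simple known-variance z-test rather than any analysis from a study discussed here):

```python
import random

random.seed(1)

CRITICAL_Z = 1.96  # two-sided critical value at alpha = .05

def null_comparison(n=30):
    """Compare two samples drawn from the SAME distribution
    (true effect is zero) with a z-test, sd known to be 1."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = (2 / n) ** 0.5
    return abs(diff / se) > CRITICAL_Z  # "significant" purely by chance

trials = 1000                # simulated studies
comparisons_per_study = 40   # as in the example above
hits = sum(
    sum(null_comparison() for _ in range(comparisons_per_study))
    for _ in range(trials)
)
rate = hits / (trials * comparisons_per_study)
print(f"False-positive rate under the null: {rate:.3f}")  # close to .05
print(f"Chance 'hits' per 40 comparisons: {rate * comparisons_per_study:.1f}")
```

Under the null, roughly two of every 40 comparisons come out "significant"; a Discussion built only on those is built on chance.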
Structure of a scientific paper:
Differences in data processing that can lead to different results include:
- Assumptions
- Choices that are made
- Differences in models
- Differences in included covariates
Literature lecture 1:
Problems of NHST:
The consequences of the misconceptions of NHST are that:
- Scientists overestimate the importance of their effects
- Scientists ignore effects that they falsely believe don’t exist because of ‘accepting the
null’
- Scientists pursue effects that they falsely believe exist because of ‘rejecting the null’
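The "accepting the null" problem can be illustrated with a quick power simulation (hypothetical numbers: a true standardized effect of 0.3, 25 participants per group, and a simple known-variance z-test):

```python
import random

random.seed(2)

TRUE_EFFECT = 0.3  # a real but modest effect exists
N = 25             # small per-group sample size

def study_is_significant():
    # Group a really does differ from group b by TRUE_EFFECT.
    a = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]
    diff = sum(a) / N - sum(b) / N
    se = (2 / N) ** 0.5
    return abs(diff / se) > 1.96

trials = 4000
power = sum(study_is_significant() for _ in range(trials)) / trials
print(f"Power: {power:.2f}")
# With power this low, most studies of this real effect come out
# non-significant; 'accepting the null' here would be a mistake.
```

At this sample size only a minority of studies detect the effect, so a single non-significant result says very little about whether the effect exists.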
Science should be objective and it should be driven, above all else, by a desire to find out
truths about the world. It should not be self-serving. Unfortunately, scientists compete for
scarce resources to do their work; it is easier to obtain those resources if you are
‘successful’, and being ‘successful’ is tied up with NHST.
Incentive structures and publication bias:
Significant findings are 7 times more likely to be published than non-significant ones. This
phenomenon is called publication bias. In psychology 90% of journal articles report
significant results. This bias is driven partly by reviewers and editors rejecting articles with
non-significant results and partly by scientists not submitting articles with non-significant
results because they are aware of this editorial bias.
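The consequence of this filter can be sketched in a few lines: if a small true effect is studied with low power and only significant results are published, the published literature systematically overestimates the effect. All numbers below are made up for illustration, using a simple known-variance z-test:

```python
import random

random.seed(3)

TRUE_EFFECT = 0.2  # small true standardized effect
N = 20             # per-group sample size, hence low power

def run_study():
    a = [random.gauss(TRUE_EFFECT, 1) for _ in range(N)]
    b = [random.gauss(0, 1) for _ in range(N)]
    diff = sum(a) / N - sum(b) / N
    se = (2 / N) ** 0.5
    return diff, abs(diff / se) > 1.96

studies = [run_study() for _ in range(5000)]
all_effects = [d for d, _ in studies]
published = [d for d, sig in studies if sig]  # only significant survive

mean_all = sum(all_effects) / len(all_effects)
mean_pub = sum(published) / len(published)
print(f"True effect:              {TRUE_EFFECT}")
print(f"Mean estimate, all runs:  {mean_all:.2f}")
print(f"Mean estimate, published: {mean_pub:.2f}")  # inflated
```

The unfiltered estimates average out near the true effect, while the "published" subset is several times larger: the filter, not the phenomenon, produces the inflation.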
On top of this, researchers that find significant results have a better-looking track record and
therefore will be a strong candidate for jobs, research funding and internal promotion.
The current incentive structures in science are individualistic rather than collective.
Individuals are rewarded for ‘successful’ studies that can be published and can therefore
form the basis for funding applications. ‘Success’ is therefore defined largely by results being
significant. This might lead to feeling pressure to get significant results. It has been found
that researchers working in high-stress, publish-or-perish environments less often report
that their hypotheses were wrong, possibly because the incentive structure in academia
encourages people in these environments to cheat more.
Other ways that scientists contribute to publication bias are:
1. Selectively reporting their results to focus on significant findings and exclude
non-significant ones. This could entail not including details of other experiments
whose results contradict the significant findings.
2. Researchers might capitalize on researcher degrees of freedom to show their
results in the most favorable light possible. Researcher degrees of freedom refers to
the fact that a scientist has many decisions to make when designing and analyzing a
study (e.g., when the same data set was given to 29 research teams, 20 found a
significant result and 9 did not). These researcher degrees of freedom could be
misused, for example, to exclude cases in order to make a result significant.
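One such degree of freedom, post hoc case exclusion, can be simulated. Both groups below come from the same null distribution, but whenever the first test is non-significant the "researcher" drops the two most inconvenient cases per group and tests again. This is a deliberately crude sketch with a known-variance z-test, not a real analysis pipeline:

```python
import random

random.seed(4)

def significant(a, b):
    na, nb = len(a), len(b)
    diff = sum(a) / na - sum(b) / nb
    se = (1 / na + 1 / nb) ** 0.5  # sd known to be 1
    return abs(diff / se) > 1.96

def flexible_study(n=30):
    # No true effect: both groups from the same distribution.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    if significant(a, b):
        return True
    # Misused degree of freedom: drop the two cases per group
    # that most work against whichever difference showed up.
    if sum(a) / n >= sum(b) / n:
        a, b = sorted(a)[2:], sorted(b)[:-2]
    else:
        a, b = sorted(a)[:-2], sorted(b)[2:]
    return significant(a, b)

trials = 4000
rate = sum(flexible_study() for _ in range(trials)) / trials
print(f"Nominal alpha: 0.05; actual false-positive rate: {rate:.3f}")
```

Even this single two-step flexibility inflates the false-positive rate far above the nominal 5%, which is why undisclosed exclusions are so damaging.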
For scientists reporting on their own behaviour, across all studies, on average 1.97%
admitted fabricating or falsifying data or altering results to improve the outcome.
The percentages were higher for other questionable practices, such as allowing
industry funders to either write the first draft of the report or influence when the
study is terminated.
There were high rates of scientists responding that they were aware of others failing to
report contrary data in an article, choosing a statistical technique that provided a more