ISSUES OF RELIABILITY, VALIDITY + SAMPLING
THE ISSUE OF RELIABILITY
EXPERIMENTAL RESEARCH -
In the context of an experiment, reliability refers to the ability
to repeat a study + obtain the same result (REPLICATION).
It is essential that all conditions are kept the same when the study is
repeated – otherwise any change in the result may be due to the changed
conditions rather than a lack of reliability.
OBSERVATIONAL TECHNIQUES –
Observations should be consistent – ideally 2 or more
observers should produce the same record.
INTER-OBSERVER RELIABILITY = the extent to which the
observers agree.
→ calculated by dividing total agreements by the total number of
observations – a result of 0.80 or more suggests good inter-observer
reliability.
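The calculation is simple enough to show directly. A minimal sketch in Python, using hypothetical records from two observers:

```python
# Hypothetical records from two observers using the same behaviour checklist;
# each entry is the category recorded for one observation interval.
observer_a = ["aggressive", "passive", "aggressive", "passive", "aggressive"]
observer_b = ["aggressive", "passive", "passive", "passive", "aggressive"]

# Inter-observer reliability = total agreements / total number of observations.
agreements = sum(a == b for a, b in zip(observer_a, observer_b))
reliability = agreements / len(observer_a)

print(f"Inter-observer reliability: {reliability:.2f}")  # 0.80 in this example
# A result of 0.80 or more suggests good inter-observer reliability.
```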
The reliability of observations can be improved through
training observers to use a coding system/behaviour checklist.
SELF-REPORT TECHNIQUES –
There are 2 different types of reliability which are particularly
apparent in self-report techniques such as questionnaires +
interviews.
INTERNAL RELIABILITY – a measure of the extent to
which something is consistent within itself – e.g. all the
questions in a test should be measuring the same thing.
EXTERNAL RELIABILITY – a measure of consistency
over several different occasions – e.g. if the same
interview is conducted with the same people (both
interviewer + interviewee), the outcome should be the
same, otherwise the interview is not reliable.
INTER-INTERVIEWER RELIABILITY – whether 2 interviewers produce
the same outcome.
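One way to put a number on external reliability (not spelled out in these notes, but a common approach) is to correlate the same people's scores from two separate occasions. A minimal sketch in Python, with hypothetical scores:

```python
from statistics import correlation  # Pearson's r, Python 3.10+

# Hypothetical scores for the same 6 people taking the same test on two occasions.
occasion_1 = [12, 18, 9, 22, 15, 20]
occasion_2 = [13, 17, 10, 21, 14, 19]

r = correlation(occasion_1, occasion_2)
print(f"Correlation across occasions: {r:.2f}")
# A strong positive correlation suggests the measure is externally reliable;
# a weak one suggests the outcome changes from occasion to occasion.
```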
There are various ways to assess reliability:
SPLIT-HALF METHOD – used to compare a person’s performance on two
halves of a questionnaire/test. If the test is assessing the same thing
in all its questions, then there should be a strong positive correlation
between the scores on the two halves. This is a measure of internal
reliability.
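A minimal sketch of the split-half method in Python, assuming the test is split into odd- and even-numbered items and the two half-scores are correlated (the item scores are hypothetical):

```python
from statistics import correlation  # Pearson's r, Python 3.10+

# Hypothetical item scores (0-5) for 4 participants on a 10-item test.
responses = [
    [3, 4, 3, 5, 4, 3, 4, 4, 5, 3],
    [1, 2, 1, 2, 2, 1, 2, 1, 2, 2],
    [4, 5, 4, 4, 5, 5, 4, 5, 4, 4],
    [2, 3, 2, 3, 2, 3, 3, 2, 3, 2],
]

# Split each person's test into two halves (odd vs even items) and total each half.
odd_half = [sum(person[0::2]) for person in responses]
even_half = [sum(person[1::2]) for person in responses]

r = correlation(odd_half, even_half)
print(f"Split-half correlation: {r:.2f}")
# A strong positive correlation between the halves suggests good internal reliability.
```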