Course 1.7 – Organizational psychology
Problem 2. Selective selection
Evaluation of selection
Criterion: Good employee performance
Predictor: Anything assessed in a job applicant that is related to the criterion
Validation study: Study that determines how related the predictor is to the criterion. This
process involves 5 steps:
1. Conduct a job analysis. (analysis of the components/tasks of a job and the KSAOs
that a person should have to succeed in these tasks)
2. Specify job performance criteria.
3. Choose predictors. (selection methods, e.g. interview, psychological tests etc.)
4. Validate the predictors. (criterion-related validity)
5. Cross-validate. (replicate the results from one sample in another sample; see the
sketch after this list)
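As an illustration of steps 4 and 5 (the data and numbers here are hypothetical, not from the course), a minimal Python sketch: validate by correlating predictor and criterion scores, then check that the correlation replicates in a second, independent sample.

```python
import numpy as np

# Hypothetical data: cognitive-test scores (predictor) and later
# supervisor performance ratings (criterion) for two samples.
rng = np.random.default_rng(0)
test_a = rng.normal(100, 15, size=50)               # validation sample
perf_a = 0.5 * test_a + rng.normal(0, 10, size=50)  # criterion, sample A
test_b = rng.normal(100, 15, size=50)               # cross-validation sample
perf_b = 0.5 * test_b + rng.normal(0, 10, size=50)  # criterion, sample B

# Step 4 - validate: criterion-related validity is the correlation
# between predictor and criterion in the first sample.
r_validation = np.corrcoef(test_a, perf_a)[0, 1]

# Step 5 - cross-validate: check that the relation replicates
# in a second, independent sample.
r_crossval = np.corrcoef(test_b, perf_b)[0, 1]

print(f"validity coefficient (sample A): {r_validation:.2f}")
print(f"cross-validation (sample B):     {r_crossval:.2f}")
```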
Validity generalization: If a test is valid in one setting, it may also be valid in another
organization or job position.
Predictor Assessment
Multiple hurdles: Set a passing score for each predictor, so that applicants must clear a
separate hurdle for every predictor. E.g. the applicant's computing degree passes the
hurdle for computer knowledge, while another hurdle is used for social skills (see the
sketch below).
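A minimal sketch of the multiple-hurdles logic (the predictor names and cutoff scores are hypothetical): an applicant advances only by clearing every hurdle.

```python
# Hypothetical hurdles: predictor -> minimum passing score.
hurdles = {"computer_knowledge": 70, "social_skills": 60}

def passes_all_hurdles(scores: dict) -> bool:
    """An applicant is rejected as soon as any one hurdle is failed."""
    return all(scores[name] >= cutoff for name, cutoff in hurdles.items())

print(passes_all_hurdles({"computer_knowledge": 85, "social_skills": 55}))  # False
print(passes_all_hurdles({"computer_knowledge": 85, "social_skills": 72}))  # True
```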
Regression analysis: The regression approach uses the score from each predictor in
an equation to provide a numerical estimate or forecast of the criterion.
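A minimal sketch of the regression approach (hypothetical scores, with the equation fitted by ordinary least squares; the notes do not prescribe a particular fitting method):

```python
import numpy as np

# Hypothetical past data: two predictor scores per employee and
# their later performance ratings (criterion).
X = np.array([[70.0, 60.0], [85.0, 72.0], [55.0, 90.0], [92.0, 66.0]])
y = np.array([3.1, 4.2, 3.5, 4.4])

# Fit the regression equation y_hat = b0 + b1*x1 + b2*x2.
A = np.column_stack([np.ones(len(X)), X])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

# Forecast the criterion for a new applicant.
applicant = np.array([1.0, 80.0, 70.0])  # [intercept, x1, x2]
print(f"predicted performance: {applicant @ coeffs:.2f}")
```

Unlike multiple hurdles, this approach is compensatory: a high score on one predictor can offset a low score on another.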
Validity
Construct validity: There is confidence in the interpretation of what the measure
represents; that is, the measure accurately represents what it is supposed to
measure.
Criterion-related validity: The degree to which the measure relates to the criterion,
assessed by correlating test scores with performance. E.g. scores on an intelligence
test are related to how well applicants perform a certain task.
o Concurrent validity: addresses whether a measure accurately and
completely represents actual job performance (data for the criterion and
the predictor come from current employees at the same time)
o Predictive validity: addresses whether a measure represents future job
performance (predictors are measured before the criterion: the selection is
made and the criterion is then compared with the predictors)
Face validity: A measure that appears to assess what it was designed to assess. E.g.
"Do you like your job?"
Content validity: A multi-item measure of a variable that covers the whole
domain of that variable. Judged without calculating coefficients.
Faith validity: Organisations might believe that a selection method (e.g. a
psychometric ability test) is valid because a reputable company sells it and it is
packaged in a very expensive-looking way.
Incremental validity: whether a new element added to the tests increases the
overall validity of the test or not (see the sketch below)
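Incremental validity is often expressed as the gain in explained criterion variance (delta R-squared) when the new element is added to the existing predictors; a minimal sketch with hypothetical data:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of a least-squares fit of y on X (with intercept)."""
    A = np.column_stack([np.ones(len(X)), X])
    y_hat = A @ np.linalg.lstsq(A, y, rcond=None)[0]
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

rng = np.random.default_rng(1)
n = 80
ability = rng.normal(size=n)     # existing predictor
interview = rng.normal(size=n)   # new element under scrutiny
performance = ability + 0.4 * interview + rng.normal(size=n)

r2_old = r_squared(ability.reshape(-1, 1), performance)
r2_new = r_squared(np.column_stack([ability, interview]), performance)
print(f"incremental validity (delta R^2): {r2_new - r2_old:.3f}")
```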
Reliability
Reliability: To what extent the tool used for the assessment is consistent under varying
conditions.
Test-retest reliability: Administer the same test more than once to the same
candidate at different times. For the measure to be considered reliable, the scores
should be highly correlated.
Parallel forms: Two similar forms of a test, created to measure the same ability, are
used to assess external reliability. If reliability is high, there should be a strong
correlation between the scores obtained on the two forms.
o Both of them are forms of external reliability (comparison with a reference
point; see the sketch below)
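Both coefficients come down to the same computation: correlate the two sets of scores (time 1 vs. time 2, or form A vs. form B). A minimal sketch with hypothetical scores:

```python
import numpy as np

# Hypothetical scores for ten candidates on two administrations
# (test-retest) or on two parallel forms of the same test.
scores_1 = np.array([12, 15, 9, 18, 14, 11, 16, 13, 17, 10])
scores_2 = np.array([13, 14, 10, 17, 15, 10, 17, 12, 18, 11])

# High correlation = the measure is reliable against its reference point.
reliability = np.corrcoef(scores_1, scores_2)[0, 1]
print(f"external reliability estimate: {reliability:.2f}")
```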
Internal reliability: The different parts of the same measure produce the same outcome.
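One common way to check this (the notes do not name a specific index) is a split-half correlation: split the items into two halves and correlate candidates' half-scores. A minimal sketch with simulated item responses:

```python
import numpy as np

# Hypothetical item responses: rows = candidates, columns = test items,
# all items driven by the same underlying trait plus noise.
rng = np.random.default_rng(2)
trait = rng.normal(size=(30, 1))
items = trait + rng.normal(scale=0.8, size=(30, 8))  # 8 items, one trait

# Split-half: correlate the summed scores of odd vs. even items.
half_a = items[:, 0::2].sum(axis=1)
half_b = items[:, 1::2].sum(axis=1)
print(f"split-half reliability: {np.corrcoef(half_a, half_b)[0, 1]:.2f}")
```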