Assessment 3: Industrial Psychological Testing & Assessment
Question 1
Reliability in psychological testing refers to how consistent and stable a test or tool is
over time. For example, if a test is used to assess a candidate's cognitive skills for
hiring, reliability means the test should give similar results every time it is administered,
regardless of when or where it is taken.
The different types of reliability:
Test-Retest Reliability: This checks whether the same test gives similar results when given to
the same group of people at two different times. For instance, Lerato could give a group
the same test now and again later, then correlate the two sets of scores to see if they are
consistent.
Inter-item Consistency: This looks at how well the different items of the test work
together to measure the same thing. Lerato can use a statistic called Cronbach's alpha to see
how consistently the items on the test measure the same construct.
To check whether the new recruitment tool is reliable, Lerato can do the following (a short
worked sketch follows this list):
Do a Test-Retest Study: Give the test to a group, wait some time, then give them the
same test again. Compare the two sets of scores to see whether they are similar.
Calculate Internal Consistency: Use statistical software to calculate Cronbach's
alpha; a value of about 0.70 or higher is generally taken to mean the test items are
consistent with one another.
Create Parallel Forms: Make two equivalent versions of the test, give both to the same
group, and compare the results to see whether the two forms give similar scores.
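A minimal sketch of how these checks could be computed, assuming the scores have already
been captured; the sample data and variable names below are hypothetical, and the formulas
are the standard ones (Pearson's r for test-retest and parallel forms, Cronbach's alpha for
inter-item consistency):

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical total scores for the same group tested on two occasions
time1 = np.array([24, 31, 28, 35, 22, 30, 27, 33])
time2 = np.array([26, 30, 29, 34, 23, 31, 25, 35])

# Test-retest reliability: correlate the two administrations
r_retest, _ = pearsonr(time1, time2)
print(f"Test-retest reliability: r = {r_retest:.2f}")

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical item-level responses (rows = respondents, columns = items)
responses = np.array([
    [4, 5, 4, 3],
    [2, 3, 2, 2],
    [5, 4, 5, 4],
    [3, 3, 4, 3],
    [4, 4, 3, 4],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")

# A parallel-forms check works the same way as test-retest:
# correlate scores on Form A with scores on Form B for the same group.
```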
Question 2
Validity refers to how well a test measures what it is supposed to measure, and therefore
how accurate the conclusions drawn from its scores are. Validity depends on both the
purpose of the test and the group of people being tested. This means a test is only valid
if it is used for the right reason and with the right group.
It is important because if the test is not valid, the results could give a wrong picture of a
candidate's abilities, leading to poor hiring decisions.
To evaluate the new tool, Lerato should look at these types of validity:
Content Validity: This checks whether the test content covers all the important aspects of
what it is meant to measure. For example, a test that is supposed to measure leadership
should include the different aspects of leadership, not just one.
Construct Validity: This looks at whether the test actually measures what it says it does,
like intelligence or personality. It can be split into:
Convergent Validity: How much the test matches other tests that measure the
same thing.
Divergent Validity: How little the test matches tests that measure different
things.
Criterion-related Validity: This checks how well the test results relate to an outside
criterion measure, such as job performance (a short worked sketch follows this list). It
includes:
Predictive Validity: How well the test predicts future job performance.
Concurrent Validity: How well the test scores relate to a criterion measured at the same
time, such as the current performance of existing employees.
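As an illustration of how criterion-related validity could be checked, here is a minimal
sketch, assuming test scores and later performance ratings are available for the same
candidates; all names and figures below are hypothetical:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical selection-test scores, and job-performance ratings
# collected for the same candidates some months after hiring
test_scores = np.array([55, 62, 48, 70, 66, 52, 59, 64])
performance = np.array([3.1, 3.8, 2.9, 4.2, 3.9, 3.0, 3.4, 4.0])

# Predictive validity: does the test forecast later performance?
r_pred, p = pearsonr(test_scores, performance)
print(f"Predictive validity: r = {r_pred:.2f} (p = {p:.3f})")

# Convergent validity follows the same pattern, correlating the new
# test with an established test of the same construct; divergent
# validity expects a low correlation with an unrelated measure.
```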
Question 3
Step 1 : Planning
Lerato should clearly outline the objectives.
Get a test-development team together.
Identify the traits or skills that the tool aims to measure.
Step 2 : Development
Source and review the test items.
Step 3 : Assembling and pre-testing
Arrange and finalise the tool, then pre-test the prototype.
Step 4 : Initial analysis
Establish item difficulty and discrimination values (see the sketch after this step).
Inspect the items for bias.
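The item analysis in this step can be illustrated with a minimal sketch, assuming items are
scored 0/1; the response data are hypothetical, and the calculations follow the classical
approach of proportion-correct for difficulty and corrected item-total correlation for
discrimination:

```python
import numpy as np

# Hypothetical 0/1 response matrix: rows = test-takers, columns = items
responses = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [1, 0, 1, 1, 0],
])

total = responses.sum(axis=1)

for i in range(responses.shape[1]):
    item = responses[:, i]
    # Difficulty (p-value): proportion of test-takers answering correctly
    difficulty = item.mean()
    # Discrimination: corrected item-total correlation
    # (item vs. total score with that item removed)
    rest = total - item
    discrimination = np.corrcoef(item, rest)[0, 1]
    print(f"Item {i + 1}: difficulty = {difficulty:.2f}, "
          f"discrimination = {discrimination:.2f}")
```

Items that almost everyone passes or fails (difficulty near 1 or 0), or that correlate weakly
with the rest of the test, would be flagged for revision in Step 5.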
Step 5 : Revising
Revise the test and its content, then select the items relevant for the standardisation
version.
Thereafter, put together the final version.
Administer the final version to a sample of the target population.
Step 6 : Evaluation
Check the final version's validity and reliability.
Step 7 : Publish
Publish and market the test after a test manual has been compiled.
Step 8 : Ongoing revision
Continuous revision and updating of the test is necessary.