Learning Objective Assessment: 2025-2026 Exam Questions with Detailed Answers and Rationales
Assessment Foundations and Purposes
1. A teacher uses an end-of-unit test to determine if students have mastered the state
standards for that unit. What is the primary purpose of this assessment?
A) Formative
B) Diagnostic
C) Summative ✓
D) Benchmark
Rationale: Summative assessments are used to evaluate student learning at the end of an
instructional period, such as a unit, and are compared against a standard or benchmark.
2. Which assessment is BEST for identifying specific student misconceptions or skill gaps at the
beginning of a new instructional unit?
A) Summative assessment
B) Formative assessment
C) Diagnostic assessment ✓
D) Performance assessment
Rationale: Diagnostic assessments are given before instruction to identify students' prior
knowledge, strengths, weaknesses, and misconceptions.
3. As students are working on practice problems, a teacher circulates around the room,
providing immediate feedback. This is an example of:
A) Assessment for learning. ✓
B) Assessment of learning.
C) Assessment as learning.
D) Norm-referenced assessment.
Rationale: Assessment for learning (formative) is used during instruction to guide teaching and
provide feedback to improve learning.
4. A student reflects on their own essay draft using a rubric and identifies areas for revision.
This process is best described as:
A) Assessment for learning.
B) Assessment of learning.
C) Assessment as learning. ✓
D) Ipsative assessment.
Rationale: Assessment as learning involves students metacognitively monitoring their own
learning and making adjustments. The student is the active agent in the assessment process.
5. A school district administers a test to all 3rd graders to predict their performance on the
state's annual standardized test. This is known as a(n):
A) Diagnostic assessment
B) Aptitude test
C) Benchmark assessment ✓
D) Criterion-referenced test
Rationale: Benchmark assessments are interim tests given periodically to predict student
performance on summative, high-stakes tests and to gauge the effectiveness of instruction.
6. The primary purpose of a formative assessment is to:
A) Assign a final grade.
B) Provide a basis for student ranking.
C) Inform and adjust instruction. ✓
D) Report to parents on student achievement.
Rationale: Formative assessment is a tool for teachers to check for understanding and for
students to gauge their progress, allowing for instructional adjustments.
7. A norm-referenced score tells you:
A) Whether a student met a predefined standard.
B) How a student performed relative to a national sample of peers. ✓
C) The specific skills a student has mastered.
D) The percentage of correct answers a student achieved.
Rationale: Norm-referenced tests compare a student's performance to a normative group, often
resulting in percentiles or stanines.
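The percentile ranks and stanines mentioned in this rationale can be computed directly. Below is a minimal Python sketch; the `norm_group` scores are invented for illustration, while the stanine cut points are the standard cumulative-percentage boundaries:

```python
def percentile_rank(score, norm_group):
    """Percent of the norm group scoring below the given score."""
    below = sum(1 for s in norm_group if s < score)
    return 100 * below / len(norm_group)

def stanine(pct_rank):
    """Map a percentile rank to a stanine (1-9) using standard cut points."""
    cuts = [4, 11, 23, 40, 60, 77, 89, 96]  # cumulative-% boundaries
    return 1 + sum(1 for c in cuts if pct_rank >= c)

# Invented scores from a small normative peer group
norm_group = [55, 60, 62, 68, 70, 73, 75, 78, 80, 85]
pr = percentile_rank(73, norm_group)
print(pr, stanine(pr))  # 50.0 5
```

A percentile rank of 50 means the student outscored half the norm group; stanine 5 is the middle band of the nine-unit scale.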
8. A criterion-referenced score is most useful for determining:
A) If a student is in the top 10% of their class.
B) Mastery of specific instructional objectives. ✓
C) A student's innate ability or aptitude.
D) How a school compares to the national average.
Rationale: Criterion-referenced tests measure a student's performance against a fixed set of
predetermined criteria or learning standards.
Validity and Reliability
9. A math test for 5th graders contains many word problems with complex, unfamiliar
vocabulary. This test is likely low in:
A) Reliability
B) Construct validity
C) Content validity
D) Face validity ✓
Rationale: Face validity is whether a test appears, on the surface, to measure what it claims
to measure. To a reasonable observer, a math test dominated by unfamiliar vocabulary looks
like a test of reading comprehension rather than math ability. (The heavy reading load can
also introduce construct-irrelevant variance, which threatens construct validity.)
10. If students receive vastly different scores on two different versions of the same test, the
test may be lacking in:
A) Validity
B) Alternate-form reliability ✓
C) Inter-rater reliability
D) Content validity
Rationale: Alternate-form reliability refers to the consistency of results between two different
but equivalent versions of a test.
11. A history test that only asks about dates and names, but ignores historical concepts and
cause-effect relationships, lacks:
A) Predictive validity
B) Content validity ✓
C) Test-retest reliability
D) Face validity
Rationale: Content validity ensures the test adequately covers all aspects of the domain it's
intended to measure. This test misses key concepts.
12. Two teachers score the same set of essays and give very different grades. This assessment
has a problem with:
A) Construct validity
B) Internal consistency
C) Inter-rater reliability ✓
D) Test-retest reliability
Rationale: Inter-rater reliability is the degree of agreement between two or more scorers. A
clear rubric is needed to improve this.
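Beyond raw percent agreement, inter-rater reliability is often summarized with Cohen's kappa, which corrects agreement for chance. A sketch with invented essay grades from the two teachers:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Chance agreement: probability both raters assign the same category at random
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented grades for eight essays
teacher1 = ["A", "B", "B", "C", "A", "C", "B", "A"]
teacher2 = ["A", "B", "C", "C", "B", "C", "B", "A"]
print(round(cohens_kappa(teacher1, teacher2), 2))  # 0.63
```

Kappa of 1.0 is perfect agreement; values below about 0.6 usually signal that the rubric needs tightening or the raters need calibration.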
13. A high school entrance exam that accurately predicts which students will succeed in their
first year has high:
A) Content validity
B) Concurrent validity
C) Predictive validity ✓
D) Construct validity
Rationale: Predictive validity is the extent to which a score on an assessment forecasts
performance on a future, related measure.
14. A test yields consistent results when administered multiple times to the same students
over a short period. This test is high in:
A) Validity
B) Test-retest reliability ✓
C) Internal consistency
D) Construct validity
Rationale: Test-retest reliability measures the stability of scores over time.
15. A test designed to measure "scientific reasoning" actually only measures factual recall of
scientific terms. This test is low in:
A) Content validity
B) Construct validity ✓
C) Face validity
D) Criterion-related validity
Rationale: Construct validity is the degree to which a test measures the underlying theoretical
construct (e.g., "scientific reasoning") it claims to measure.
16. A single test question is not a reliable indicator of a student's knowledge because:
A) It lacks validity.
B) A single data point is unreliable. ✓
C) It cannot be norm-referenced.
D) It is always too easy.
Rationale: Reliability increases with the number of items. A single item provides a very small,
and therefore unreliable, sample of student behavior.
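The Spearman-Brown prophecy formula quantifies this: lengthening a test by a factor n raises its reliability in a predictable way. A sketch using an invented single-item reliability of 0.20:

```python
def spearman_brown(reliability, n):
    """Predicted reliability when a test is lengthened by a factor of n."""
    return n * reliability / (1 + (n - 1) * reliability)

# Invented starting point: one item with reliability 0.20
for n in (1, 5, 25):
    print(n, round(spearman_brown(0.20, n), 2))
```

With these assumed numbers, going from 1 item to 25 parallel items pushes predicted reliability from 0.20 to roughly 0.86, which is why single questions are never trusted as measures on their own.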
Assessment Design and Item Development