CPCE: Assessment Questions and Answers 2023
Measurement
The general process of determining the dimensions of an attribute or trait.

Assessment
Processes and procedures for collecting information about human behavior. Assessment tools include tests and inventories, rating scales, observations, interviews, and other techniques.

Appraisal
Appraisal implies going beyond measurement to making judgments about human attributes and behaviors; it is used interchangeably with evaluation.

Interpretation
Making a statement about the meaning or usefulness of measurement data according to the professional counselor's knowledge and judgment.

Measures of Central Tendency
A distribution of scores (measurements on a number of individuals) can be examined using the following measures:
Mean: the arithmetic average, symbolized by X-bar or M.
Median: the middle score in a distribution of scores.
Mode: the most frequent score in a distribution of scores.
All three fall in the same place (are identical) when the distribution of scores is symmetrical, i.e., normally distributed (not skewed).

Skew
The degree to which a distribution of scores is not normally distributed.
Positive skew: tail to the right.
Negative skew: tail to the left.
The mean is pulled in the direction of the extreme scores represented by the tail of a skewed distribution.

Range
A measure of variability: the highest score minus the lowest score. Some researchers speak of the inclusive range, which is the high score minus the low score, plus one (1).

Standard Deviation
A measure of variability that describes the spread within a distribution of scores. The symbol SD signifies the standard deviation of a sample; for a population's variability, the symbol σ (sigma) is used. The standard deviation is essentially the average deviation of scores from the mean (technically, the square root of the mean of the squared deviations) and is an excellent measure of the dispersion of scores.

Variance
A measure of variability: the square of the standard deviation.
The variance does not describe the dispersion of scores as well as the standard deviation does, but it reappears in the next section in the discussion of analysis of variance.

Normal (Bell) Curve
The normal curve essentially distributes the scores (individuals) into six equal parts: three above the mean and three below it. Counselors should be familiar with the distribution of scores within the normal curve: 34% + 34% = 68% of cases fall within one standard deviation of the mean; adding 13.5% + 13.5% brings the total to 95% within two standard deviations; and adding 2% + 2% brings the total to about 99% within three standard deviations.

Percentile
A value below which a specified percentage of cases fall.

Stanine
From "standard nine": converts a distribution of scores into nine parts (1 to 9), with a mean of five and a standard deviation of about 2.

Standardized Scores
A standardized score scale is like a common language that can be used to compare several different test scores for the same individual. Standardized scores are obtained by converting raw score distributions. These derived scores provide constant normative or relative meaning, allowing comparisons between individuals. Specifically, standardized scores express a person's distance from the mean in terms of the standard deviation of that standard score distribution. Standardized scores are continuous and have equality of units. The most common are the z-score and the T score.

z-score
The mean is 0 and the standard deviation is 1.0; z-scores typically range from -3.0 to 3.0. (See the normal curve figure.) The z in z-score should remind you of zero, the mean of this distribution.

T Score
The mean of this standardized score scale is 50 and the standard deviation is 10. By (T)ransforming the z-score, negative scores are eliminated. (See the normal curve figure.) The T should remind you of ten, the standard deviation unit of this distribution.
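The measures above (central tendency, dispersion, and standard scores) can be sketched in Python. The score list and the raw score of 85 are invented examples; the functions simply restate the definitions given in this guide.

```python
import statistics

scores = [70, 75, 80, 80, 85, 90, 95]  # hypothetical raw test scores

mean = statistics.mean(scores)           # arithmetic average (M)
median = statistics.median(scores)       # middle score
mode = statistics.mode(scores)           # most frequent score
score_range = max(scores) - min(scores)  # highest minus lowest
sd = statistics.stdev(scores)            # sample standard deviation (SD)
variance = sd ** 2                       # variance = SD squared

# Standard scores: z has mean 0 and SD 1; T has mean 50 and SD 10.
def z_score(raw, m, s):
    return (raw - m) / s

def t_score(z):
    return 50 + 10 * z  # the (T)ransform eliminates negative values

z = z_score(85, mean, sd)  # distance from the mean in SD units
t = t_score(z)             # same standing expressed on the T scale
```

A raw score exactly at the mean converts to z = 0 and T = 50, which is why the two scales describe the same relative standing.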
Correlation Coefficient
The Pearson product-moment correlation coefficient (r) is frequently used. A correlation coefficient ranges from -1.00 (a perfect negative correlation) to 1.00 (a perfect positive correlation). It is a statistical index that shows the relationship between two sets of numbers. When a very strong correlation exists, knowing one score of an individual lets you predict (to a large degree) that person's other score. A correlation between two variables is called bivariate; among three or more variables, it is called multivariate. The correlation coefficient tells you nothing about cause and effect, only the degree of relationship.

Reliability
Reliability is the consistency of a test or measure: the degree to which the test can be expected to provide similar results for the same subjects on repeated administrations. Reliability can be viewed as the extent to which a measure is free from error; if the instrument has little error, it is reliable. A correlation coefficient is used to determine reliability. If the reliability coefficient is high, about .70 or higher, test scores have little error and the instrument is said to be reliable. Reliability is a necessary psychometric property of tests and measures.

Types of Reliability: Stability
This is test-retest reliability, obtained by using the same instrument on both occasions: the same group is tested twice, and the results of the two administrations are correlated. The length of time between administrations and intervening experiences may influence stability reliability. Two weeks is a good interval between test administrations.

Types of Reliability: Equivalence
Alternate forms of the same test are administered to the same group, and the correlation between them is calculated. How comparable the forms of the test are will influence this reliability, as may intervening events and experiences.

Types of Reliability: Internal Consistency
In the split-half method, the test is divided into two halves.
The correlation between these two halves is calculated. Because you reduce the length of the test (one half versus one half), you necessarily reduce its measured reliability. Consequently, you may apply the Spearman-Brown formula (sometimes called the "prophecy" formula) to estimate how reliable the test would be had you not split it in two. Internal consistency may also be determined by measuring interitem consistency: the more homogeneous the items, the more reliable the test. The Kuder-Richardson formulas (there are two) are used if the test contains dichotomous items (such as true-false or yes-no); if the instrument contains nondichotomous items (such as multiple choice or essay), Cronbach's alpha coefficient is applied.

True and Error Variance
Tests measure "true" and "error" variance. You want to measure true variance: the actual psychological trait or characteristic that the test is measuring. For example, suppose two tests are administered, each measuring true variance (T1 and T2) and error variance (E1 and E2). If the correlation between the two tests, or two forms of the same test, is .90, then the amount of true variance measured in common is the correlation squared (.90^2 = 81%).

Coefficient of Determination
The degree of common variance: the index (81%) that results from squaring the correlation (.90).

Coefficient of Nondetermination
The unique variance, not held in common. In the example above it would be 19%, representing the error variance.

Standard Error of Measurement (SEM)
Another measure of reliability, useful in interpreting the test scores of an individual. The SEM may also be referred to as the confidence band or confidence limits. The standard error of measurement helps determine the range within which an individual's test score probably falls. For example: a person scores 92 on a test whose SEM is 5.0. Chances are about 2 in 3 (68%) that the person's score falls between 87 and 97.
(Refer to the normal curve: 34% + 34% of cases fall within one standard deviation, positive and negative, for a total of 68%.) For the same test with the same SEM of 5.0, you can say that 95% of the time the person's score would fall between 82 and 102. Every test has its own unique SEM, which is calculated in advance and may be reported on the test's score profile.

Validity
Validity is the degree to which a test measures what it purports to measure for the specific purpose for which it is used. In other words, validity is situation specific, depending on the purpose and population; an instrument may be valid for some purposes and not others.

Types of Validity: Face
The instrument looks valid. For example, a math test has math items. This "validity" can be important from the test-taker's perspective.

Types of Validity: Content
The instrument contains items drawn from the domain of items that could be included. For example, two professors of Psychology 101 devise a final exam that covers the important content they both teach.

Types of Validity: Predictive
The predictions made by the test are confirmed by later behavior (the criterion). For example, scores on the Graduate Record Exam predict later grade point average.

Types of Validity: Concurrent
The results of the test are compared with other tests' results or behaviors (criteria) at or about the same time. For example, scores on an art aptitude test may be compared to grades already assigned to students in an art class.

Types of Validity: Construct
A test has construct validity to the extent that it measures some hypothetical construct, such as anxiety or creativity. Usually several tests or instruments are used to measure different components of the construct or of the hypothesized relationships between that construct and other constructs. Convergent validation occurs when there is high correlation between the construct under investigation and others it should theoretically relate to.
Discriminant validation occurs when there is no significant correlation between the construct under investigation and others it should not relate to. The construct validation process is best when multiple traits are measured using a variety of methods.

Reliability and Validity Relationship
Tests may be reliable but not valid. Valid tests are reliable, unless of course there is a change in the underlying trait or characteristic, which might occur through maturation, training, or development.

Power-Based vs. Speed-Based Tests
Power based: no time limits, or very generous ones (such as the NCE and CPCE). Speed based: timed, with the emphasis placed on speed and accuracy; examples are measures of intelligence, ability, and aptitude.

Norm-Referenced Assessment
Comparing individuals to others who have taken the test before. Norms may be national, state, or local. In norm-referenced testing, how you compare with others is more important than what you know.

Criterion-Referenced Assessment
Comparing an individual's performance to some predetermined criterion that has been established as important. The National Counselor Exam's cut-off score is an example; for the CPCE, university programs are allowed to determine the criterion (cut-off score). Criterion referenced is sometimes called domain referenced.

Ipsatively Interpreted Assessment
Comparing the results on the test within the individual: for example, looking at an individual's highs and lows on an aptitude battery that measures several aptitudes. There is no comparison with others. Another example of ipsative interpretation is when an individual's score on a second test is compared to the score on the first test. A maximal performance test may elicit a person's best performance, as on an aptitude or achievement test, whereas typical performance may occur on an interest or personality test.

Purposes/Rationale for Using Tests
a. Help the counselor decide whether the client's needs are within the range of his or her services.
b. Help the client gain self-understanding.
c. Help the counselor gain a better understanding of the client.
d. Assist the counselor in determining which counseling methods, approaches, or techniques will be suitable.
e. Assist the counselee in predicting future performance in education, training, or work.
f. Help counselees make decisions about their educational or work futures.
g. Help identify interests not previously known.
h. Help evaluate the outcomes of counseling.

Circumstances Under Which Testing May Be Useful
a. Placement, in education or work settings
b. Admissions, such as to undergraduate, graduate, or professional schools
c. Diagnosis
d. Counseling
e. Educational planning
f. Evaluation
g. Licensure and certification
h. Self-understanding

Regression Toward the Mean
Statistical regression means that if one earns a very low score (at the 15th percentile or lower) or a very high score (at the 85th percentile or higher) on a pretest, the individual will probably earn a score closer to the mean on the posttest. This is because of error occurring due to chance and to personal and environmental factors, which can reliably be expected to differ on the posttest.

Standardized vs. Nonstandardized Assessment
Standardized: the instruments are administered in a formal, structured procedure and the scoring is specified. Nonstandardized: there are no formal or routine instructions for administration or scoring; examples include checklists and rating scales.

Tests and Inventories: Intelligence
Intelligence is the ability to think in abstract terms and to learn; some also believe it is the ability to adapt and adjust to the environment. It is also called general ability or cognitive ability.
Intelligence tests:
Stanford-Binet Intelligence Scales
Wechsler Adult Intelligence Scale (WAIS-IV)
Wechsler Intelligence Scale for Children (WISC-IV)
Cognitive Abilities Test
Specialized ability tests:
Kaufman Assessment Battery for Children - II
System of Multicultural Pluralistic Assessment (SOMPA).
It measures medical, social systems, and pluralistic factors.
ACT (American College Testing Program)
SAT Reasoning Test
Miller Analogies Test (MAT)
Graduate Record Exam (GRE)

Tests and Inventories: Achievement
Measures the effects of learning or a set of experiences. These tests may be used diagnostically. Many states have their own K-12 achievement tests. A national measure of academic performance is the National Assessment of Educational Progress (NAEP). Other tests available include:
California Achievement Tests
Iowa Tests of Basic Skills
Stanford Achievement Test
Specialized achievement tests:
General Education Development (GED)
College Board's Advanced Placement Program
College-Level Examination Program (CLEP)

Tests and Inventories: Aptitude
Also called ability tests, these measure the effects of general learning and are used to predict future performance. Each of those listed here measures several abilities or aptitudes:
Differential Aptitude Tests (DAT)
O*NET Ability Profiler (formerly the General Aptitude Test Battery, GATB)
Armed Services Vocational Aptitude Battery (ASVAB)
Career Ability Placement Survey (CAPS)
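Several of the quantities defined earlier, the Pearson r, the coefficient of determination, the Spearman-Brown correction, and the SEM confidence band, can be sketched in Python. The two score lists are invented; the SEM formula SD * sqrt(1 - reliability) is the standard psychometric one, although this guide treats SEM as a value reported with the test.

```python
import math

def pearson_r(xs, ys):
    # Pearson product-moment correlation between two sets of scores.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def spearman_brown(half_r):
    # "Prophecy" formula: estimated full-length reliability
    # from the correlation between the two halves of a test.
    return 2 * half_r / (1 + half_r)

def sem(sd, reliability):
    # Standard error of measurement: SD * sqrt(1 - reliability).
    return sd * math.sqrt(1 - reliability)

# Two forms of the same test given to one group (invented scores).
form_a = [88, 92, 79, 85, 90, 74]
form_b = [86, 95, 80, 83, 91, 70]
r = pearson_r(form_a, form_b)     # equivalence reliability
determination = r ** 2            # shared (true) variance
nondetermination = 1 - r ** 2     # unique (error) variance

# 68% confidence band for an observed score of 92 when SEM = 5.0,
# matching the worked example in the SEM entry above.
observed, error = 92, 5.0
band = (observed - error, observed + error)  # (87.0, 97.0)
```

For instance, if the two halves of a split test correlate at .50, spearman_brown(0.5) estimates the full-length reliability at about .67, which is why the correction is applied after splitting.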
Document information: uploaded July 21, 2023; 11 pages; written 2022/2023; type: exam, containing questions and answers.