Elisa Henderick i-Human case with complete solution, all correct, rated A+
Clear and specific (about more than one variable). Has more than one possible
outcome. Value-free. Testable. - ANSWER: A good hypothesis should be
Be falsifiable (capable of being tested). Have at least two variables (one variable
cannot be defined in terms of another). - ANSWER: A good hypothesis must:
Variable used to explain another variable (causally related to the outcome, must
precede the dependent variable in time). - ANSWER: Independent Variable
Variable being explained (outcome you wish to study, what will change) - ANSWER:
Dependent variable
Statement that postulates the relationship between the independent variable and
dependent variable - ANSWER: Hypothesis
Concept we are investigating - ANSWER: Variable
Affects the strength or direction of the relationship between the IV & DV (but is not
causal). (If an intervention is effective only among females then gender is a
moderating variable) - ANSWER: Moderating Variable
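The moderation idea on this card can be sketched with a small simulation (hypothetical numbers, not from the source): gender changes the strength of the IV-to-DV relationship, so the intervention's effect shows up only for females.

```python
import random
import statistics

random.seed(1)

# Hypothetical moderating variable: the intervention (IV) raises the outcome
# (DV) only among females, so gender moderates the IV -> DV relationship.
def outcome(treated: bool, female: bool) -> float:
    effect = 10.0 if (treated and female) else 0.0  # effect exists only for females
    return 50.0 + effect + random.gauss(0, 2)       # baseline + effect + noise

females_treated = [outcome(True, True) for _ in range(500)]
females_control = [outcome(False, True) for _ in range(500)]
males_treated = [outcome(True, False) for _ in range(500)]
males_control = [outcome(False, False) for _ in range(500)]

# Treatment effect within each group: large for females, near zero for males.
print(round(statistics.mean(females_treated) - statistics.mean(females_control)))  # ~10
print(round(statistics.mean(males_treated) - statistics.mean(males_control)))      # ~0
```

Nothing here is specific to gender; any variable that changes the strength or direction of the IV-DV link (without being on the causal path itself) plays the same role.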
Intervening mechanisms. Part of a causal chain between IV and DV. Moves the
independent variable to the dependent variable (characteristics of immigrants could
affect people's perceptions of them) - ANSWER: Mediating Variable
Empirical indicators that give evidence of the presence or absence of the concept
being studied. How we would recognize a variable in the real world. (Includes
frequency, intensity, and duration of what is being measured.) - ANSWER: Operational Definition
Dictionary definition - tells us what a term means but not what we are using to
measure it in research - ANSWER: Nominal Definition
Self-reports, direct observation, available records - ANSWER: Three ways of
operationalizing variables
To assess and improve the conceptualization, design, planning, administration,
implementation, effectiveness, efficiency, and utility of social interventions and
human service programs - ANSWER: Purpose of Program Evaluation
Asks: should the program be continued? Typically quantitative. - ANSWER: Summative Evaluation
Obtaining information for improving program performance (qualitative or quantitative) -
ANSWER: Formative Evaluation
Learn about stakeholders. Involve them in planning. Find out who wants the
evaluation. Obtain feedback. Include a logic model. Tailor form/style to the needs of
stakeholders. Present negative findings tactfully. Provide suggestions for developing
new programs. Be realistic/practical. - ANSWER: Steps to Enhance Utilization of
Evaluation
Focus on identifying strengths and weaknesses in program processes and
recommending needed improvements. Tend to rely on qualitative methods (posing
as clients, asking staff members questions) - ANSWER: Process Evaluation
Did the client change? How effective were we? Purpose is to provide feedback on
program effectiveness. The trick is in specifying goals. Data are collected at the end
of the program cycle, analyzed, and the findings given to the program at the end.
Assumes the program operates as intended and is stable. - ANSWER: Outcome Evaluation
When the information we collect consistently reflects a FALSE picture of the concept
we seek to measure (because of the way we collected the data or the dynamics of
who is providing the data). Can be caused by BIAS (social desirability, cultural biases).
- ANSWER: Systematic Error
Result from difficulties in understanding or administering measures. No consistent
pattern. Examples: when measurement is cumbersome and the respondent gets bored,
or when things are unclear. - ANSWER: Random (transient) error
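The difference between the two error cards can be sketched with a small simulation (hypothetical numbers, assuming a "true" score of 50): systematic error pushes every measurement in the same direction and does not average out, while random error scatters around the truth with no consistent pattern.

```python
import random
import statistics

random.seed(0)
TRUE_SCORE = 50.0  # the real value of the concept we want to measure

# Systematic error (bias): every reading is shifted the same way,
# e.g. social desirability inflating self-reports by ~5 points.
biased = [TRUE_SCORE + 5 + random.gauss(0, 1) for _ in range(1000)]

# Random (transient) error: no consistent pattern, just noise.
noisy = [TRUE_SCORE + random.gauss(0, 5) for _ in range(1000)]

print(round(statistics.mean(biased), 1))  # stays ~5 above the truth
print(round(statistics.mean(noisy), 1))   # averages back near 50
```

This is also why more data alone cannot fix bias: collecting more biased measurements only gives a more precise estimate of the wrong value.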
Random error: when people have difficulty understanding something in a
questionnaire, or it is long and complex. Systematic error (bias): in how things are
worded; social desirability. People's words don't necessarily match their deeds (a
validity problem). Inability to remember. Respondents might say they are undecided
when they worry they are in the minority. Questions should be relevant to most
respondents. The instrument might not be culturally competent, or might be too
cluttered, so respondents miss a question. - ANSWER: Problems/biases with
self-reports or questionnaires
CONSISTENCY (amount of random error). Likelihood a measurement procedure will
yield the same result/description of a phenomenon if repeated. - ANSWER: Reliability
ACCURACY. Does the scale measure what it says it measures? The extent to which
an empirical measure adequately reflects the real meaning of the concept (extent of
systematic error in measurement). - ANSWER: Validity
Determined by subjective assessments made by the researcher and other experts -
ANSWER: Face Validity
Try to use unbiased wording (to avoid systematic error). Try to use short, easy-to-
understand terms (to minimize random error). Obtain feedback from colleagues to
help spot biases or ambiguities you might have overlooked. Test the questionnaire in
a dry run with a handful of people to see if they understand it, etc. Avoid words like