R. Pieters’ hints during class
What will probably be asked in the exam:
- You will get questions from an existing survey and will have to indicate what is wrong with them,
what the error is called, and how to fix them.
This summary includes all topics mentioned below and all topics that had a thumbs up in the slides,
plus the extra topics Mr. Pieters mentioned would be important.
Common topics for open-ended questions:
- Survey errors Know these by heart. Even if they do not appear during the exam, it is good
to know them (you will be a better person, and your hair will shine more)
- Comprehension, memory, and evaluation sins e.g. evaluate a set of questions, or list sins
of comprehension, and give examples
- Survey modes e.g. given information on survey requirements – you decide and explain
which survey mode to choose
- Sources of endogeneity bias e.g. what their implications are, and how to detect them
- Sampling e.g. compute sample size (also fpc and cluster) !! e.g. link examples of surveys to
specific sampling methods
- Non-response analysis e.g. describe options and how to execute these
- Weighting class and/or propensity weighting e.g. compute (norm.) weight or response
probability
- Data cleaning – outlier analysis e.g. given information about some cases, determine which to
delete, retain, mark, or change, and why
- Reliability (Cronbach’s alpha) e.g. given correlation matrix – compute alpha
- Multilevel analysis Interpret output, give model, compute ICC
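For the sample size computation named above, a minimal sketch with assumed numbers (estimating a proportion at 95% confidence, then applying the finite population correction and a cluster design effect; all inputs are made up for illustration):

```python
# Sample size for estimating a proportion (assumed numbers throughout).
z = 1.96    # z-value for 95% confidence
p = 0.5     # most conservative proportion
e = 0.05    # desired margin of error
N = 2000    # population size (assumed)

n0 = z**2 * p * (1 - p) / e**2       # uncorrected sample size, ~384
n_fpc = n0 / (1 + (n0 - 1) / N)      # finite population correction (fpc), ~322

# cluster design: inflate by the design effect deff = 1 + (m - 1) * rho
m = 20      # assumed average cluster size
rho = 0.05  # assumed intra-cluster correlation (ICC)
deff = 1 + (m - 1) * rho             # 1.95
n_cluster = n_fpc * deff

print(round(n0), round(n_fpc), round(n_cluster))
```

The fpc only matters when the sample is a noticeable fraction of the population; the design effect shows why cluster samples need more respondents than simple random samples.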
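For the weighting-class computation, a sketch with assumed counts: within each class, the response probability is respondents/sampled, the nonresponse weight is its inverse, and weights are normalized so the weighted respondent count equals the actual respondent count:

```python
# Weighting-class adjustment (all counts are assumed for illustration).
classes = {
    "young": {"sampled": 100, "responded": 40},
    "old":   {"sampled": 100, "responded": 80},
}

weights = {}
for c, d in classes.items():
    resp_prob = d["responded"] / d["sampled"]  # response probability per class
    weights[c] = 1 / resp_prob                 # nonresponse weight = inverse

# normalize so the weighted number of respondents equals the actual number
total_resp = sum(d["responded"] for d in classes.values())
weighted_total = sum(weights[c] * classes[c]["responded"] for c in classes)
norm = {c: weights[c] * total_resp / weighted_total for c in classes}
```

Here the young class responds less, so its members get up-weighted (2.5 raw, 1.5 normalized) relative to the old class (1.25 raw, 0.75 normalized).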
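For computing Cronbach’s alpha from a correlation matrix, the standardized formula alpha = k·r̄ / (1 + (k − 1)·r̄), where r̄ is the mean inter-item correlation; a sketch with an assumed 3×3 matrix:

```python
import numpy as np

# Assumed 3x3 inter-item correlation matrix.
R = np.array([
    [1.0, 0.5, 0.4],
    [0.5, 1.0, 0.6],
    [0.4, 0.6, 1.0],
])
k = R.shape[0]

# mean of the off-diagonal correlations (diagonal of ones subtracted out)
r_bar = (R.sum() - k) / (k * (k - 1))       # here 0.5
alpha = k * r_bar / (1 + (k - 1) * r_bar)   # here 0.75
```

With mean correlation 0.5 across 3 items, alpha comes out at 0.75; more items or higher inter-item correlation pushes alpha up.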
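For the ICC from multilevel output, the null-model (intercept-only) version is simply between-group variance over total variance; assumed variance components:

```python
# ICC from (assumed) variance components of an intercept-only multilevel model.
var_between = 2.0  # group-level (intercept) variance
var_within = 6.0   # residual (individual-level) variance

icc = var_between / (var_between + var_within)
print(icc)  # 0.25: 25% of total variance sits between groups
```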
Multiple choice Q’s (example topics)
Survey errors
Validity and reliability
Types of questions and response scales
Survey modes
Sampling methods
Sample size calculation
Methods to deal with sensitive topics
- Randomized Response
- List technique
Weighting
Best and worst item-imputation methods
Outliers
Response rate calculation (RR1, RR2, RR5, RR6)
ICC calculation
Reliability calculation
Differences between scale and index, which type of reliability for scale and index
Mediation and Moderation
- Given a graph/display of a model (boxes and arrows), write down the required regression
equations
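For the randomized response topic above, a sketch of one common variant (the forced-response design; the exact design used in the course may differ): each respondent answers truthfully with probability p and is otherwise forced to say “yes”, and the sensitive proportion is recovered from the observed “yes” rate:

```python
# Forced-response randomized response (assumed design and numbers).
p = 0.75    # probability a respondent answers the sensitive question truthfully
lam = 0.40  # observed proportion of "yes" answers (assumed)

# E[lam] = p * pi + (1 - p) * 1, so solve for pi:
pi_hat = (lam - (1 - p)) / p
print(pi_hat)  # estimated prevalence of the sensitive behaviour, here 0.2
```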
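For the response rate calculation, a sketch with assumed disposition counts following the AAPOR definitions: RR1/RR2 keep cases of unknown eligibility in the denominator, while RR5/RR6 drop them (treating unknowns as ineligible):

```python
# AAPOR response rates (assumed disposition counts).
# I = complete interviews, P = partials, R = refusals,
# NC = non-contacts, O = other non-interviews, U = unknown eligibility.
I, P, R, NC, O, U = 500, 50, 200, 150, 50, 50

rr1 = I / (I + P + R + NC + O + U)        # completes only, unknowns counted in
rr2 = (I + P) / (I + P + R + NC + O + U)  # completes + partials
rr5 = I / (I + P + R + NC + O)            # unknowns excluded from denominator
rr6 = (I + P) / (I + P + R + NC + O)

print(rr1, rr2, rr5, rr6)
```

With these (made-up) counts, RR1 = 0.50 and RR2 = 0.55, while RR5 and RR6 come out higher because the 50 unknowns are dropped from the denominator.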
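For the mediation bullet, a sketch of the three regressions behind a simple X → M → Y model, on assumed simulated data; in OLS on the same sample, total effect = direct effect + indirect effect (c = c′ + a·b) holds exactly:

```python
import numpy as np

# Simulated mediation data (assumed effect sizes).
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)            # a-path: X -> M
y = 0.4 * m + 0.3 * x + rng.normal(size=n)  # b-path (M -> Y) plus direct effect c'

def fit(dep, *preds):
    """OLS slopes of dep on the given predictors (intercept estimated, then dropped)."""
    X = np.column_stack([np.ones(len(dep))] + list(preds))
    coef, *_ = np.linalg.lstsq(X, dep, rcond=None)
    return coef[1:]

(c_total,) = fit(y, x)     # regression 1: Y on X        -> total effect c
(a,) = fit(m, x)           # regression 2: M on X        -> a-path
b, c_prime = fit(y, m, x)  # regression 3: Y on M and X  -> b-path and direct effect c'

indirect = a * b
# decomposition check: c = c' + a*b (exact for OLS on the same sample)
print(abs(c_total - (c_prime + indirect)) < 1e-8)
```

Moderation, by contrast, would add a product term (X·Z) to a single regression rather than chaining regressions.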
Survey Errors
- Learn where each error occurs
The 9 survey errors, with an example of each:
I. Conceptualization error – constructs do not map onto the properties of interest
II. Operationalization error – systematic departure from the constructs – an invalid measure / a
bad scale
III. Coverage error – sampling frame does not capture the population (sampling frame ≠
population) and covered units differ systematically from non-covered
IV. Sampling error – error from observing a sample rather than the full population, often
deliberate because of cost or feasibility:
1. Accidental = goals of sampling unmet and uncorrected (mistakes)
2. Deliberate = e.g. cluster sampling adds sampling error for cost effectiveness, and one
statistically corrects for the error; lack of accessibility of population units
V. Measurement error – systematic departure of measurements from the true value of the
constructs and their relationships, due to the measurement process itself – e.g. consumers
interpret a question wrongly
VI. Processing error – systematic departure of edited responses from true value of constructs
due to editing
VII. Nonresponse error – when non-respondents systematically differ from respondents
VIII. Adjustment error – biased attempts to correct for processing, coverage, sampling, and
nonresponse errors
IX. Analysis and reporting error – systematic mistakes in analysis and reporting that reduce the
validity of findings.
DEFINITELY A QUESTION ABOUT THIS
Validity & Reliability
Is validity only present or important in the measurement part? NO!! It matters in all 3 parts
(measurement, representation, analysis & reporting).
What is validity?
Does it measure what it’s supposed to measure? Face validity: do these coefficients make sense?
What types of validity exist?
- Maximum accuracy (maximize accuracy of your measures)
- Minimal bias (bias = difference between what is true and what you think is)
Validity = accuracy, i.e. low bias
1. Measurement
• Responses to measures accurately capture the true properties
2. Representation
• Respondents of samples accurately capture the true population
3. Analysis and Reporting
• Reported analyses accurately capture true properties of the population
What is reliability?
Is it stable? How much do these results change if
• We add additional control variables to the model
• We take away some observations
• We estimate the same model on a new dataset
What types of reliability exist?
- Maximum replicability (test it again, is it the same?)
- Minimal variance
Reliability = replicability, i.e. low variance
1. Measurement
• Responses to measures replicate
2. Representation
• Respondents from samples of the population replicate
• When your sample does not overlap sufficiently with your population, you have bias!
3. Analysis and Reporting
• Reported results of analyses replicate