Survey Lectures (Fall)
Literature – Lecture 1
Biased survey
Biased survey = one that contains errors caused by the design of the survey and its
questions
Things to consider: wording of questions, structure of survey, design, style & colors
Biased survey can lead to survey response bias and higher than normal drop-out rates
1. The Leading Question
A common mistake survey creators make is writing a question that leads respondents to
give the “correct” answer; this invalidates your survey results
To avoid this, use neutral wording
Leading Question: How dumb is *politician* when it comes to foreign policy? This
immediately brings a negative connotation to the question.
Instead, you might ask the question: Please describe your politician’s position on
foreign policy.
2. The Loaded Question
Forces people into answering the question in a particular way
You keep them from explaining their own opinions ---> leads to survey drop-out and
unclear results
Loaded question: Where do you like to party? Well, what if the respondents don’t
like to party? What if they are homebodies?
Instead, you could ask it like this: What do you like to do on weekend evenings?
3. The Double-Barreled Question
Forces your respondents to answer two questions at once
But: you want each of your survey questions to ask only one thing
Double-barreled question: How happy or unhappy are you with the rate of current
school board funding and the common-core curriculum? Wow! This is asking a lot of
your respondents. Some might answer both questions, but many others will
concentrate on the one that means the most to them.
Instead, you could ask it like this: How happy or unhappy are you with the rate of
current school board funding? And, next question: What do you think of the
common core?
4. The Absolute Question
Yes-or-no answers can keep respondents from giving unbiased feedback; this type of
question creates bias because you aren’t getting the whole story
Usually it has only a “yes” or “no” answer; it also commonly includes words such as
all/always/ever/every
Absolute question: Do you always shower before bed?
Instead, you could ask it like this: How many nights a week do you shower before
bed?
5. The Unclear Question
Tech jargon and acronyms create bias because only some of the people in your audience
know what you are talking about
Unclear question: Do you have an iPhone?
- BUT: Not everyone has an iPhone!
Instead, you could ask it like this: Do you have a smartphone?
6. The Multiple Answer Question
Make sure there is only one answer
7. Prefer Not to Answer
Include this in your answer choices
Many people will drop out of a survey if they are uncomfortable with a question
8. Include All Possible Answers
Not including all possible answers also creates bias
You can always add “other” as a choice
9. Use Accurate Scales
When asking people to rate your question, you want to offer options ranging from bad to
excellent to avoid bias
“What is your experience with our customer service team?”
- If you leave off “poor” as an option, you’ve biased the survey
- A solution would be an NPS survey question, which uses a standardized question
with a 0-10 rating no matter where or when it is served to visitors
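As a rough illustration of why a standardized scale helps: the NPS score is computed the same way wherever the question is served. A minimal sketch (the ratings below are made up for illustration; NPS conventionally uses a 0-10 scale):

```python
def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# 3 promoters, 2 passives (7-8), 2 detractors out of 7 responses
print(round(nps([10, 9, 9, 8, 7, 4, 2]), 2))  # 14.29
```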
10. Survey Structure
Study and test your survey to root out poor structure of the survey questions
Ask more personal or in-depth questions at the end to avoid survey drop-out
Krosnick & Presser (2010) - Question and Questionnaire Design
General information
Survey results depend crucially on the questionnaire that scripts the conversation
between researchers and respondents
To minimize response errors, questionnaires should be crafted in accordance with best
practices
Open versus closed questions
This difference is especially relevant to three types of measurement: (1) asking for
choices among nominal categories (“What is the most important problem facing the
country?”), (2) ascertaining numeric quantities (“How many hours did you watch
television this week?”) and (3) testing factual knowledge (“Who is Joe Biden?”)
Social desirability response bias = a form of motivated misreporting
Recall bias = a form of unmotivated misreporting
Optimal question design
1. Use simple, familiar words (avoid technical terms, jargon, and slang)
2. Use simple syntax
3. Avoid words with ambiguous meanings, i.e., aim for wording that all respondents will
interpret in the same way
4. Strive for wording that is specific and concrete (as opposed to general and abstract)
5. Make response options exhaustive and mutually exclusive
6. Avoid leading or loaded questions that push respondents toward an answer
7. Ask about one thing at a time (avoid double-barreled questions)
8. Avoid questions with single or double negations.
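Rule 5 (exhaustive, mutually exclusive response options) can even be checked mechanically for numeric answer brackets. A minimal sketch with hypothetical age brackets (not from the text):

```python
def check_brackets(brackets):
    """Verify integer answer brackets don't overlap and leave no gaps."""
    ordered = sorted(brackets)
    for (lo1, hi1), (lo2, hi2) in zip(ordered, ordered[1:]):
        assert hi1 < lo2, f"options {(lo1, hi1)} and {(lo2, hi2)} overlap"
        assert lo2 == hi1 + 1, f"gap between {hi1} and {lo2}"
    return True

# Hypothetical age question: 18-24 / 25-34 / 35-44 / 45 and older
print(check_brackets([(18, 24), (25, 34), (35, 44), (45, 120)]))
```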
Optimal question order
1. Early questions should be easy and pleasant to answer, and should build rapport between
the respondent and the researcher
2. Questions at the very beginning of a questionnaire should explicitly address the topic of
the survey, as it was described to the respondent prior to the interview
3. Questions on the same topic should be grouped together
4. Questions on the same topic should proceed from general to specific
5. Questions on sensitive topics that might make respondents uncomfortable should be
placed at the end of the questionnaire
6. Filter questions should be included, to avoid asking respondents questions that do not
apply to them.
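A filter question (rule 6) routes respondents past items that do not apply to them. A minimal sketch of such skip logic, with hypothetical questions and IDs:

```python
# Hypothetical questionnaire with one filter question (Q1).
# Each entry: question text and a rule mapping the answer to the next item.
questionnaire = {
    "Q1": ("Do you own a smartphone?",
           lambda ans: "Q2" if ans == "yes" else "Q3"),
    "Q2": ("About how many hours per day do you use it?",
           lambda ans: "Q3"),
    "Q3": ("How old are you?",
           lambda ans: None),  # end of survey
}

def route(answers):
    """Return the sequence of question IDs a respondent actually sees."""
    qid, path = "Q1", []
    while qid is not None:
        path.append(qid)
        _, rule = questionnaire[qid]
        qid = rule(answers.get(qid))
    return path

print(route({"Q1": "no"}))   # the smartphone-usage question is skipped
print(route({"Q1": "yes"}))  # all three questions are asked
```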
Video on latent variables
Latent variables
Hypothetical ---> we know it exists, but we can’t directly sense/see/smell/taste it
Also called a “construct”
Manifest variables
Something that is real, something that we can sense/see/smell/taste
What about these variables?
There needs to be a direct connection between the manifest items and the latent
variables that we assess
When we create items on a survey, where we ask people about something, those are
going to be the manifest items relating to those constructs
E.g., you can’t see someone’s obsession, so we need to create items that really
measure that obsession
Items that measure the same construct should be correlated, so you use slightly
different wordings for the items
Lecture 1 – 27/08/2024 – Introduction to Surveys
Course objectives:
- Explain the cognitive processes through which people answer survey questions
- Design a well-considered survey
- Adapt
- Analyze data
How to design a survey?
1. What do you want to measure?
2. From the theory to questions and answers
3. Phrase specific items
4. Develop the survey
5. Pretest the survey
6. Run
Step 1: What do you want to measure
Check the exact research objectives (with supervisor)
Specification of your “rough” assignment
Manifest variables:
- Can be directly observed
- Height, hair colour
Latent variables:
- Variables we cannot exactly measure
- Indirectly measured
- Wealth, intelligence
- Attitudes such as political efficacy, being introverted, comprehensibility, risk perception
Latent variables:
Multiple questions, because:
- The concepts are abstract and multi-faceted
- When you ask multiple questions about the same construct, you will at the very least be
able to establish that you have measured one underlying thing
- You can detect/decrease the influence of measurement error in individual items
Self-report measures of a latent construct (e.g. depression)
- Measurements that represent a set of indicators of the latent construct
- If you score high on Y, then this should be reflected in A, B, C…
- E.g. depression consists of sleep problems, depressed mood, loss of interest, etc.
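This “high on the construct should be reflected in the indicators” idea is what internal-consistency measures such as Cronbach’s alpha quantify. A minimal sketch with made-up 0-3 ratings on three depression indicators (sleep problems, depressed mood, loss of interest) for six hypothetical respondents:

```python
from statistics import pvariance

# Rows = respondents, columns = the three indicator items (made-up data).
respondents = [
    [0, 1, 0],
    [1, 1, 2],
    [2, 3, 2],
    [0, 0, 1],
    [3, 2, 3],
    [1, 2, 1],
]

k = len(respondents[0])                     # number of items
items = list(zip(*respondents))             # per-item score lists
totals = [sum(r) for r in respondents]      # summed scale score
item_var = sum(pvariance(i) for i in items)

# Cronbach's alpha: high when the items vary together (consistent scale).
alpha = (k / (k - 1)) * (1 - item_var / pvariance(totals))
print(round(alpha, 2))  # 0.88
```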
Uni- vs. multidimensional scales
Step 2: From theory to question to answer
Option 1: existing scales
- E.g. need for cognition, privacy concerns
Advantage: validated!
Disadvantages: very generic, not personalized to your research, language/translations.
Option 2: no scale available ---> develop your own items
- Internal method (inductive): many items are used, and through statistical grouping
techniques it is decided after the fact which ones were relevant