scales of measurement (nominal, ordinal, interval, and ratio) Correct
Answer--nominal: measuring variables by classifying them into one group or
another e.g. male/female
-ordinal: measuring variables by classifying them by
rank/order/magnitude e.g. gold/silver/bronze medals, 1st/2nd/3rd place
-interval: measuring variables by classifying them with numbers in terms
of their order, where the distance between the numbers is at equal intervals;
no true 0- a value of zero doesn't mean the absence of that thing
e.g. intelligence test scores, GPA
-ratio: measuring variables the same way as interval, but all possible math
operations can be carried out and there is a true 0 e.g. salary in $, # of children
descriptive vs. inferential statistics Correct Answer--descriptive
statistics: describes/summarizes data collected from the sample (can use
frequency tables, graphs, histograms, boxplots)
-inferential statistics: uses the sample data to draw conclusions (make
inferences) about the larger population
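A minimal Python sketch of the distinction, using made-up sample scores (scipy is assumed to be available for the inferential part):

from statistics import mean
from scipy import stats  # assumed available

# hypothetical sample of exam scores drawn from a larger population
sample = [72, 85, 90, 68, 77, 95, 81, 74, 88, 79]

# descriptive: just summarize the sample itself
print("sample mean:", mean(sample))
print("sample size:", len(sample))

# inferential: use the sample to say something about the population,
# e.g. test whether the population mean differs from 75
t_stat, p_value = stats.ttest_1samp(sample, popmean=75)
print("t =", t_stat, "p =", p_value)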
Central tendency and variability- how do you measure these? Correct
Answer--Central tendency: finding a typical score to make sense of data-
measured by finding mean (average), median (middle # when put in
order from least to greatest), or mode (# that shows up the most)
-Variability: how much the data varies- measured by range (highest score
minus lowest score), standard deviation (how much scores deviate
from the mean score), variance (the standard deviation squared), and
sometimes interquartile range (the 75th percentile (Q3) minus the 25th
percentile (Q1), i.e. the range of the middle 50% of scores)- see the sketch below
Statistical hypothesis testing- how to decide whether to reject/retain H0
(null hypothesis) Correct Answer-we use the p value (the probability of
getting results this extreme if the null were actually true- used to cap the risk
of a Type I error, i.e. rejecting the null when, in reality, it's true/ being
wrong about the research hypothesis)
-If there's more than a 5% (or 1%) likelihood that you're wrong about the
research hyp (p > .05), we retain the H0.
-If there's less than a 5% chance you're wrong about the research hyp
(p < .05), we reject the H0- statistically significant results (see the sketch below)
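A rough sketch of the decision rule in Python (hypothetical two-group data; scipy assumed available):

from scipy import stats  # assumed available

# hypothetical test scores for two groups (e.g. tutored vs. not tutored)
group_a = [78, 85, 90, 72, 88, 95, 80]
group_b = [70, 75, 68, 72, 77, 74, 69]

# independent-samples t-test; H0: the two population means are equal
t_stat, p_value = stats.ttest_ind(group_a, group_b)

alpha = 0.05  # or 0.01 for a stricter cutoff
if p_value < alpha:
    print(f"p = {p_value:.3f} < {alpha}: reject H0 (statistically significant)")
else:
    print(f"p = {p_value:.3f} >= {alpha}: retain H0")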
Type I and Type II Errors Correct Answer-TYPE I: When you reject the
null, but the null is true (in reality, which we don't actually know). We
use the p value to make sure chances for this error are less than 5% or
1%
TYPE II: occurs when you retain the null, but the null is actually false; there
really was an effect to find, but the study failed to detect it (often because
there wasn't enough power to pick it up)
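A simulation sketch (numpy/scipy assumed available) of why the 5% cutoff matters: when the null really is true, about 5% of studies will still reject it just by chance, and those rejections are the Type I errors.

import numpy as np
from scipy import stats  # assumed available

rng = np.random.default_rng(0)
alpha, n_sims = 0.05, 10_000

# simulate studies where the null is TRUE: both groups come from the same population
false_rejections = 0
for _ in range(n_sims):
    a = rng.normal(loc=100, scale=15, size=30)
    b = rng.normal(loc=100, scale=15, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_rejections += 1

# the proportion of false rejections is the Type I error rate, roughly 0.05
print("Type I error rate:", false_rejections / n_sims)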
Threats to Internal Validity/Confounds (subject selection, history,
maturation, and regression to the mean) Correct Answer-1. Subject
Selection problems- subjects aren't selected/assigned to groups carefully
enough (e.g. no random assignment), so the groups differ before treatment begins
(e.g. monkey ulcer study- no random assignment of monkeys, so the more
emotional (ulcer-prone) monkeys were a confound; repeating the study with
proper random assignment found the opposite results)
2. History effect- something happens during treatment that affects the
results
(e.g. subjects participate in a study during final exams, incr. stress) (e.g.
college changes grading methods in the middle of the study)
3. Maturation effect- the longer a study goes on, the more likely the observed
changes are due to natural development (maturation) rather than the IV
(e.g. students given a math test repeatedly starting in 6th grade get better
over time bc they naturally developed, not bc of the IV)
4. Regression (going back) to the Mean- tendency for participants who score
very high or very low (extreme) to score less extreme (closer to the mean) in
follow-up testing- the follow-up score is usually closer to their true ability
Other Threats to Internal Validity (attrition, practice, and
instrumentation) Correct Answer-1. Attrition- subjects leave the study (e.g. bc
they feel uncomfortable), so a drop in scores could've resulted from
the drop in participants rather than the IV (e.g. a study of the effect of math
tutoring on test results- in the group w/ no tutoring, many dropped out of the
study, leading to a decr. in scores)
2. Practice effect (called "testing" in the book)- getting better at taking a
test bc you've practiced (taken it) multiple times
-can also get worse results from taking it multiple times due to fatigue
3. Instrumentation- changing the tools you're using from pretest to
posttest
How can an experimenter account for the effects of threats to internal
validity (like history and maturation)? Correct Answer-by using a
control group
-this way, if the control group shows the same change as the treatment group,
the experimenter knows the change was caused by confounds, not by the IV