Summary: Discovering Statistics Using IBM SPSS Statistics
−2LL - the log-likelihood multiplied by minus 2. This version of the likelihood is used in logistic regression.
α-level - the probability of making a Type I error (usually this value is 0.05).
Adjusted mean - in the context of analysis of covariance, this is the value of a group mean adjusted for the effect of the covariate.
Adjusted predicted value - a measure of the influence of a particular case of data. It is the predicted value of a case from a model estimated without that case included in the data. The value is calculated by re-estimating the model without the case in question, then using this new model to predict the value of the excluded case. If a case does not exert a large influence over the model then its predicted value should be similar regardless of whether the model was estimated including or excluding that case. The difference between the predicted value of a case from the model when that case was included and the predicted value from the model when it was excluded is the DFFit.
Adjusted R² - a measure of the loss of predictive power or shrinkage in regression. The adjusted R² tells us how much variance in the outcome would be accounted for if the model had been derived from the population from which the sample was taken.
AIC (Akaike's information criterion) - a goodness-of-fit measure that is corrected for model complexity. That just means that it takes account of how many parameters have been estimated. It is not intrinsically interpretable, but can be compared in different models to see how changing the model affects the fit. A small value represents a better fit to the data. (A short numerical sketch of −2LL, adjusted R² and AIC follows the entries below.)
AICC (Hurvich and Tsai's criterion) - a goodness-of-fit measure that is similar to AIC but is designed for small samples. It is not intrinsically interpretable, but can be compared in different models to see how changing the model affects the fit. A small value represents a better fit to the data.
Alpha factoring - a method of factor analysis.
Alternative hypothesis - the prediction that there will be an effect (i.e., that your experimental manipulation will have some effect or that certain variables will relate to each other).
Analysis of covariance - a statistical procedure that uses the F-statistic to test the overall fit of a linear model, adjusting for the effect that one or more covariates have on the outcome variable. In experimental research this linear model tends to be defined in terms of group means, and the resulting ANOVA is therefore an overall test of whether group means differ after the variance in the outcome variable explained by any covariates has been removed.
Analysis of variance - a statistical procedure that uses the F-statistic to test the overall fit of a linear model. In experimental research this linear model tends to be defined in terms of group means, and the resulting ANOVA is therefore an overall test of whether group means differ.
ANCOVA - acronym for analysis of covariance.
Anderson-Rubin method - a way of calculating factor scores which produces scores that are uncorrelated and standardized with a mean of 0 and a standard deviation of 1.
ANOVA - acronym for analysis of variance.
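To make the fit indices above concrete, here is a minimal Python sketch (not SPSS output; the data are simulated and the variable names are made up purely for illustration) showing one common way to compute −2LL, AIC and adjusted R² for an ordinary least-squares model with normally distributed errors. The parameter count used for AIC (slopes, intercept and error variance) is an assumption of this sketch; software packages sometimes count parameters differently.

import numpy as np

# Simulated data: 100 cases, 2 predictors (illustrative only)
rng = np.random.default_rng(42)
n, k = 100, 2
X = rng.normal(size=(n, k))
y = 1.0 + 0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.8, size=n)

# Ordinary least-squares fit with an intercept
X_design = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(X_design, y, rcond=None)
residuals = y - X_design @ beta
rss = residuals @ residuals
tss = ((y - y.mean()) ** 2).sum()

# R-squared and the shrinkage-corrected (adjusted) R-squared
r2 = 1 - rss / tss
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Gaussian log-likelihood evaluated at the maximum-likelihood estimate of the error variance
log_lik = -0.5 * n * (np.log(2 * np.pi) + np.log(rss / n) + 1)
minus_2ll = -2 * log_lik

# AIC = 2 * (number of estimated parameters) - 2 * log-likelihood; smaller = better fit
n_params = k + 2  # slopes + intercept + error variance (one common convention)
aic = 2 * n_params + minus_2ll

print(f"R^2 = {r2:.3f}, adjusted R^2 = {adj_r2:.3f}")
print(f"-2LL = {minus_2ll:.2f}, AIC = {aic:.2f}")

Because −2LL and AIC have no absolute interpretation, in practice you would compute them for competing models fitted to the same data and prefer the model with the smaller values.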
AR(1) - this stands for first-order autoregressive structure. It is a covariance structure used in multilevel linear models in which the relationship between scores changes in a systematic way: the correlation between scores is assumed to get smaller over time, and variances are assumed to be homogeneous. This structure is often used for repeated-measures data (especially when measurements are taken over time, such as in growth models).
Autocorrelation - when the residuals of two observations in a regression model are correlated.
bᵢ - unstandardized regression coefficient. Indicates the strength of the relationship between a given predictor, i, of many and an outcome in the units of measurement of the predictor. It is the change in the outcome associated with a unit change in the predictor.
βᵢ - standardized regression coefficient. Indicates the strength of the relationship between a given predictor, i, of many and an outcome in a standardized form. It is the change in the outcome (in standard deviations) associated with a one standard deviation change in the predictor. (A short sketch showing how bᵢ and βᵢ relate follows the entries below.)
β-level - the probability of making a Type II error (Cohen, 1992, suggests a maximum value of 0.2).
Bar chart - a graph in which a summary statistic (usually the mean) is plotted on the y-axis against a categorical variable on the x-axis (this categorical variable could represent, for example, groups of people, different times or different experimental conditions). The value of the mean for each category is shown by a bar. Different-coloured bars may be used to represent levels of a second categorical variable.
Bartlett's test of sphericity - unsurprisingly, this is a test of the assumption of sphericity. This test examines whether a variance-covariance matrix is proportional to an identity matrix. Therefore, it effectively tests whether the diagonal elements of the variance-covariance matrix are equal (i.e., group variances are the same), and whether the off-diagonal elements are approximately zero (i.e., the dependent variables are not correlated). Jeremy Miles, who does a lot of multivariate stuff, claims he's never ever seen a matrix that reached non-significance using this test and, come to think of it, I've never seen one either (although I do less multivariate stuff), so you've got to wonder about its practical utility.
Bayes factor - the ratio of the probability of the observed data given the alternative hypothesis to the probability of the observed data given the null hypothesis. Put another way, it is the likelihood of the alternative hypothesis relative to the null. A Bayes factor of 3, for example, means that the observed data are 3 times more likely under the alternative hypothesis than under the null hypothesis. A Bayes factor less than 1 supports the null hypothesis by suggesting that the probability of the data given the null is higher than the probability of the data given the alternative hypothesis. Conversely, a Bayes factor greater than 1 suggests that the observed data are more likely given the alternative hypothesis than the null. Values between 1 and 3 are considered evidence for the alternative hypothesis that is 'barely worth mentioning', values between 3 and 10 are considered 'substantial evidence' ('having substance' rather than 'very strong') for the alternative hypothesis, and values greater than 10 are considered strong evidence for the alternative hypothesis.
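As a rough illustration of the two coefficient entries above, the Python sketch below (simulated data, illustrative names) fits a single-predictor regression twice: once on the raw scores to obtain the unstandardized slope b, and once on z-scored variables to obtain the standardized β. It also checks the standard conversion β = b × (SD of predictor / SD of outcome).

import numpy as np

# Simulated data in arbitrary raw units (illustrative only)
rng = np.random.default_rng(7)
n = 200
x1 = rng.normal(loc=50, scale=10, size=n)
y = 3.0 + 0.4 * x1 + rng.normal(scale=5, size=n)

# Unstandardized slope b: change in y per unit change in x1
X = np.column_stack([np.ones(n), x1])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Standardized slope beta: refit with z-scored predictor and outcome
zx = (x1 - x1.mean()) / x1.std(ddof=1)
zy = (y - y.mean()) / y.std(ddof=1)
beta = np.linalg.lstsq(np.column_stack([np.ones(n), zx]), zy, rcond=None)[0][1]

# The same value obtained by rescaling the unstandardized slope
beta_from_b = b[1] * x1.std(ddof=1) / y.std(ddof=1)

print(f"b_1 = {b[1]:.3f} (outcome units per unit of the predictor)")
print(f"beta_1 = {beta:.3f} (SDs of the outcome per SD of the predictor); check: {beta_from_b:.3f}")

With a single predictor the standardized slope also equals the Pearson correlation between the predictor and the outcome, which is a handy sanity check.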