Summary

Class notes and Summary of materials Data Science Research Methods (JBM)

Pages: 40
Uploaded on: 24-11-2021
Written in: 2021/2022

This document contains notes on the lectures of Alessandro Di Bucchianico and Thomas Klein, as well as a summary of the compulsory reading material for each lecture. This is the large, detailed summary: it contains almost every detail mentioned during the lectures.


A book? No
Chapters summarized: Parts of chapters 2, 3, 4, 6, 9, 10, 11, 16
Content preview

Data Science Research Methods
JBM020

Part 1: methods with fixed effects
19 April:
o Read: Sections 3.3.1 and 3.3.2 from Experimental Design
o Read: Chapter 2 from Experimental Design

3.3.1. p-Value

p-value: a quantity used in hypothesis testing. It represents the weight of evidence against a null hypothesis.
In a graph, the p-value of an upper-tailed test is the area to the right of the observed value. We can thus interpret it as the highest significance level for which we still accept H0. If α is pre-set, H0 is rejected if the p-value is less than α; otherwise it is accepted.

One-sided upper-tailed test: the p-value is the area to the right of the test statistic.
One-sided lower-tailed test: the p-value is the area to the left of the test statistic.
Two-sided test: the p-value is twice the smaller of the two tail areas (to the right or to the left of the test statistic).
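The three cases above can be sketched numerically. A minimal illustration using only the Python standard library; the test statistic z = 1.8 and the standard normal reference distribution are assumptions for the example, not values from the text:

```python
import math

def phi(z):
    # standard normal CDF, computed via the complementary error function
    return 0.5 * math.erfc(-z / math.sqrt(2))

z = 1.8                            # illustrative observed test statistic

p_upper = 1 - phi(z)               # upper-tailed: area to the right of z
p_lower = phi(z)                   # lower-tailed: area to the left of z
p_two = 2 * min(p_upper, p_lower)  # two-sided: twice the smaller tail area
```

With α = 0.05 pre-set, this z would be rejected in the upper-tailed test (p ≈ 0.036 < α) but not in the two-sided test (p ≈ 0.072 > α).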

3.3.2. Type I and Type II Errors

Type I Error: the error of rejecting H0 when it is true.
Type II Error: the error of accepting H0 when it is false.

The significance level α = P(reject H0 | H0 true) is the probability that we reject H0 when it is true. This Type I error can be made smaller by decreasing the value of α. However, the Type II error then becomes more probable: it is a trade-off. The probability of a Type II error is β = P(accept H0 | H0 false). Its value depends on the real value of μ and is therefore different for each value of μ. As the separation between the mean under H0 and the assumed true mean under H1 increases, β decreases.

The probability of correctly accepting H0 is 1−α and the probability of correctly rejecting H0 is 1−β (the power of the test).

The optimal solution depends on the consequences of each type of error.
This makes it situation-specific.
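The trade-off can be made concrete for a one-sided upper-tailed z-test with known σ. A small sketch; the sample size n = 25, σ = 1, μ0 = 0 and level α = 0.05 are illustrative assumptions, not values from the text:

```python
import math

def phi(z):
    # standard normal CDF via the complementary error function
    return 0.5 * math.erfc(-z / math.sqrt(2))

def beta(mu_true, mu0=0.0, sigma=1.0, n=25):
    # P(accept H0 | true mean = mu_true) for an upper-tailed z-test
    z_alpha = 1.6449                 # critical value for alpha = 0.05
    shift = (mu_true - mu0) / (sigma / math.sqrt(n))
    return phi(z_alpha - shift)
```

As the text states, β shrinks as the separation from μ0 grows: here beta(0.5) < beta(0.2), and at μ = μ0 the acceptance probability is 1−α = 0.95.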

Chapter 2: One-Factor Designs and the Analysis
of Variance
2.1. One-Factor Designs

A one-factor design studies the impact of a single factor on some performance measure.

Notation:
Y is the dependent variable.
X is the independent variable.
ε is a random error component, representing all factors other than X that have an influence.
To show there is a functional relationship: Y = f(X, ε).

In Yij, the index i identifies the observation (the row, e.g. the person) and j identifies the level of X (the column).

Replicated experiment: it has more than one data value at each level of the factor under study.
The number of rows, i.e. the number of values of Y per level, is the number of replicates R. The total number of experimental outcomes is the number of rows times the number of columns.

2.1.1. The Statistical Model

An example is Yij = μ + τj + εij, with μ the overall mean, τj the differential effect associated with the j-th level of X, and εij the noise or error.

Those last three quantities need to be estimated.
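A small simulation can clarify the roles of μ, τj and εij in this model; the numeric values below are made up for illustration, not taken from the course:

```python
import random

random.seed(0)
mu = 10.0                  # overall mean
tau = [-1.0, 0.0, 1.0]     # differential effects tau_j (chosen to sum to zero)
R = 5                      # replicates per factor level

# generate Y_ij = mu + tau_j + eps_ij with Gaussian noise eps_ij
data = [[mu + tau[j] + random.gauss(0, 0.5) for j in range(len(tau))]
        for _ in range(R)]
```

Each column j of `data` then scatters around μ + τj, which is exactly what the estimation procedure below tries to recover.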

2.1.2. Estimation of the Parameters of the Model

A column mean is denoted as Y∙j = (1/R) Σ_{i=1}^{R} Yij.


Grand mean: the average of all RC data points, Y∙∙. It is the sum of all values divided by RC, or the sum of all column means divided by C. If the number of data points is not equal for each column, it can also be computed as a weighted average of the column means.
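Computed on a small made-up data set (plain Python, no external libraries; the numbers are illustrative only):

```python
# R = 3 replicates (rows), C = 4 factor levels (columns)
data = [
    [10.0, 12.0, 9.0, 11.0],
    [11.0, 13.0, 8.0, 12.0],
    [12.0, 14.0, 10.0, 13.0],
]
R, C = len(data), len(data[0])

# column means Y.j and grand mean Y..
col_means = [sum(data[i][j] for i in range(R)) / R for j in range(C)]
grand_mean = sum(sum(row) for row in data) / (R * C)

# with equal replicates per column, the grand mean equals
# the unweighted mean of the column means
assert abs(grand_mean - sum(col_means) / C) < 1e-12
```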

As a criterion for those estimates there is least squares: the optimal estimate is the one that minimizes the sum of the squared differences between the actual values and the "predicted values". These differences (the residuals) are labelled e. The method uses Tj as an estimate for τj (namely Y∙j − Y∙∙) and M as an estimate for μ (namely Y∙∙).

eij = Yij − M − Tj  and  ΣΣ (eij)² = ΣΣ (Yij − M − Tj)²

The ΣΣ is a summation over all R rows and again over all C columns; the order does not matter.

From deriving the estimates, we get eij = Yij − Y∙j.
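The least-squares estimates and the resulting residuals can be checked numerically on a small made-up data set; substituting M = Y∙∙ and Tj = Y∙j − Y∙∙ into eij = Yij − M − Tj indeed reduces each residual to Yij − Y∙j:

```python
data = [
    [10.0, 12.0, 9.0, 11.0],
    [11.0, 13.0, 8.0, 12.0],
    [12.0, 14.0, 10.0, 13.0],
]
R, C = len(data), len(data[0])

col_means = [sum(data[i][j] for i in range(R)) / R for j in range(C)]
M = sum(sum(row) for row in data) / (R * C)        # estimate of mu
T = [cm - M for cm in col_means]                   # estimates of tau_j
resid = [[data[i][j] - M - T[j] for j in range(C)] for i in range(R)]

# check: e_ij = Y_ij - Y.j for every cell
for i in range(R):
    for j in range(C):
        assert abs(resid[i][j] - (data[i][j] - col_means[j])) < 1e-12
```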