Data types
- Cross-sectional data: many units (e.g. households, companies, districts, countries) measured at a single point in time. Especially useful for testing economic theories about structural relationships.
- Time series data: a single unit (or a few units) observed at different points in time. Especially useful for forecasting key economic figures.
- Panel data: several units that are each observed at two or more points in time; a combination of cross-sectional and time series data (see the sketch below).
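As a small illustration (not part of the original notes), the hypothetical pandas sketch below builds a toy panel of three firms observed in two years. Slicing it by year gives a cross-section, and slicing it by firm gives a time series; all names and numbers are made up.

```python
import pandas as pd

# Hypothetical toy panel: several units (firms), each observed at two points in time.
panel = pd.DataFrame(
    {
        "firm": ["A", "A", "B", "B", "C", "C"],
        "year": [2020, 2021, 2020, 2021, 2020, 2021],
        "revenue": [10.0, 12.0, 7.0, 6.5, 20.0, 22.0],
    }
).set_index(["firm", "year"])

print(panel.xs(2020, level="year"))  # cross-section: all firms at one point in time
print(panel.xs("A", level="firm"))   # time series: one firm followed over time
```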
Simple linear regression
$$Y_i = \beta_0 + \beta_1 X_i + u_i$$
where $Y_i$ is the dependent variable, $X_i$ the independent variable, $\beta_0$ the intercept, $\beta_1$ the slope and $u_i$ the regression error.
The regression error consists of omitted components: all variables other than X that influence Y. It also includes errors in the measurement of Y.
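As a toy illustration (not from the notes), the numpy sketch below simulates data from this model; the parameter values $\beta_0 = 2$, $\beta_1 = 0.5$, the sample size and the distributions are arbitrary assumptions.

```python
import numpy as np

# Simulate data from the simple linear regression model Y_i = b0 + b1*X_i + u_i.
# All parameter values and distributions below are arbitrary illustration choices.
rng = np.random.default_rng(0)
n = 200
beta0_true, beta1_true = 2.0, 0.5

X = rng.normal(loc=10.0, scale=3.0, size=n)  # regressor
u = rng.normal(loc=0.0, scale=1.0, size=n)   # regression error (omitted factors, measurement error in Y)
Y = beta0_true + beta1_true * X + u          # dependent variable
```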
The sample mean
The least squares estimator of the population mean $\mu_Y$ is the sample mean: it solves

$$\min_m \sum_{i=1}^{n} (Y_i - m)^2$$

with solution

$$\hat{m} = \bar{Y} = \frac{1}{n} \sum_{i=1}^{n} Y_i$$
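A quick numerical check of this claim (made-up data): the sketch below evaluates the sum of squared deviations over a grid of candidate values $m$ and confirms that the minimizer coincides with the sample mean.

```python
import numpy as np

# Check numerically that the sample mean minimizes sum((Y_i - m)^2); toy data.
Y = np.array([3.0, 5.0, 4.0, 8.0, 6.0])

m_grid = np.linspace(Y.min(), Y.max(), 1001)
sse = ((Y[None, :] - m_grid[:, None]) ** 2).sum(axis=1)

print(m_grid[sse.argmin()])  # grid minimizer, ~5.2
print(Y.mean())              # sample mean, 5.2
```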
How can we estimate the intercept and slope?
We will focus on the least squares estimator of the unknown parameters just like we did when calculating the
sample mean. We therefore have to solve:
$$\min_{\beta_0, \beta_1} \sum_{i=1}^{n} \left( Y_i - \hat{Y}_i \right)^2 = \sum_{i=1}^{n} \left( Y_i - (\beta_0 + \beta_1 X_i) \right)^2$$
The OLS estimator minimises the average squared difference between the actual values and the predicted
values based on the estimated line.
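To make this concrete, the sketch below (a toy example assuming scipy is available; the data and starting values are made up) minimizes the sum of squared residuals numerically and compares the result with the closed-form OLS fit from np.polyfit.

```python
import numpy as np
from scipy.optimize import minimize

# Minimize the OLS objective sum((Y_i - (b0 + b1*X_i))^2) numerically; hypothetical data.
rng = np.random.default_rng(1)
X = rng.normal(size=100)
Y = 1.0 + 2.0 * X + rng.normal(size=100)

def ssr(b):
    b0, b1 = b
    return np.sum((Y - (b0 + b1 * X)) ** 2)

res = minimize(ssr, x0=[0.0, 0.0])
print(res.x)                     # numerical minimizer: (b0_hat, b1_hat)
print(np.polyfit(X, Y, deg=1))   # closed-form OLS fit: (slope, intercept), same values
```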
The first order conditions for the intercept
$$\frac{\partial LS}{\partial \beta_0} = -2 \sum_{i=1}^{n} \left( Y_i - (\beta_0 + \beta_1 X_i) \right) = 0$$

$$\sum_{i=1}^{n} Y_i - n\hat{\beta}_0 - \hat{\beta}_1 \sum_{i=1}^{n} X_i = 0$$

$$\hat{\beta}_0 = \frac{1}{n} \sum_{i=1}^{n} Y_i - \frac{\hat{\beta}_1}{n} \sum_{i=1}^{n} X_i = \bar{Y} - \hat{\beta}_1 \bar{X}$$
The intercept has no substantive interpretation if there are no observations with X = 0: you cannot draw conclusions outside the range of your data.
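A quick numerical check of the intercept formula (hypothetical data; np.polyfit is used only as a reference OLS fit): the fitted intercept equals $\bar{Y} - \hat{\beta}_1 \bar{X}$.

```python
import numpy as np

# Verify b0_hat = Ybar - b1_hat * Xbar on made-up data.
rng = np.random.default_rng(2)
X = rng.uniform(0, 10, size=50)
Y = 3.0 - 0.7 * X + rng.normal(size=50)

b1_hat, b0_hat = np.polyfit(X, Y, deg=1)   # reference OLS fit: slope, intercept
print(b0_hat)                              # fitted intercept
print(Y.mean() - b1_hat * X.mean())        # same value from the formula
```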
The first order conditions for the slope
$$\frac{\partial LS}{\partial \beta_1} = -2 \sum_{i=1}^{n} \left( Y_i - (\beta_0 + \beta_1 X_i) \right) X_i = 0$$

$$\sum_{i=1}^{n} Y_i X_i - \hat{\beta}_0 \sum_{i=1}^{n} X_i - \hat{\beta}_1 \sum_{i=1}^{n} X_i^2 = 0$$

Substituting $\hat{\beta}_0 = \bar{Y} - \hat{\beta}_1 \bar{X}$:

$$\sum_{i=1}^{n} Y_i X_i - (\bar{Y} - \hat{\beta}_1 \bar{X}) \sum_{i=1}^{n} X_i - \hat{\beta}_1 \sum_{i=1}^{n} X_i^2 = 0$$

$$\sum_{i=1}^{n} Y_i X_i - \bar{Y} \sum_{i=1}^{n} X_i = \hat{\beta}_1 \left( \sum_{i=1}^{n} X_i^2 - \bar{X} \sum_{i=1}^{n} X_i \right)$$

Dividing both sides by $n$:

$$\frac{1}{n} \sum_{i=1}^{n} Y_i X_i - \bar{Y} \bar{X} = \hat{\beta}_1 \left( \frac{1}{n} \sum_{i=1}^{n} X_i^2 - \bar{X}^2 \right)$$

Multiplying numerator and denominator by $\frac{n}{n-1}$ and rewriting the sums in deviation form gives

$$\hat{\beta}_1 = \frac{\frac{1}{n-1} \sum_{i=1}^{n} (Y_i - \bar{Y})(X_i - \bar{X})}{\frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar{X})^2} = \frac{s_{XY}}{s_X^2} = \frac{\text{sample covariance}}{\text{sample variance of } X}$$
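The slope formula can be checked numerically. In the hypothetical sketch below, the ratio of the sample covariance to the sample variance of X matches the slope from a reference OLS fit.

```python
import numpy as np

# Verify b1_hat = s_XY / s_X^2 (sample covariance over sample variance of X); made-up data.
rng = np.random.default_rng(3)
X = rng.normal(5.0, 2.0, size=80)
Y = 1.5 + 0.8 * X + rng.normal(size=80)

s_xy = np.cov(X, Y, ddof=1)[0, 1]    # sample covariance (1/(n-1) normalization)
s_xx = np.var(X, ddof=1)             # sample variance of X
print(s_xy / s_xx)                   # slope via the formula
print(np.polyfit(X, Y, deg=1)[0])    # slope from a reference fit, same value
```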
Residuals, the estimates of the unknown error terms
$$\hat{u}_i = Y_i - \hat{Y}_i$$
Measures of fit
$R^2$ measures the fraction of the variance of Y that is explained by X. It is unit-free and ranges between zero (no fit) and one (perfect fit). For a regression with a single regressor X, $R^2$ equals the square of the correlation coefficient between X and Y.
$$Y_i = \hat{Y}_i + \hat{u}_i$$

$$s_Y^2 = s_{\hat{Y}}^2 + s_{\hat{u}}^2 \qquad (s^2 = \text{sample variance})$$

Total SS = Explained SS + Residual SS

$$R^2 = \frac{ESS}{TSS} = 1 - \frac{RSS}{TSS} = \frac{\sum_{i=1}^{n} (\hat{Y}_i - \bar{\hat{Y}})^2}{\sum_{i=1}^{n} (Y_i - \bar{Y})^2}$$
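The two expressions for $R^2$ and its link to the squared correlation can be verified numerically; the sketch below uses made-up simulated data.

```python
import numpy as np

# Compute R^2 as ESS/TSS and as 1 - RSS/TSS, and compare with the squared correlation.
rng = np.random.default_rng(4)
X = rng.normal(size=100)
Y = 2.0 + 1.5 * X + rng.normal(size=100)

b1, b0 = np.polyfit(X, Y, deg=1)
Y_hat = b0 + b1 * X
u_hat = Y - Y_hat

tss = np.sum((Y - Y.mean()) ** 2)
ess = np.sum((Y_hat - Y_hat.mean()) ** 2)
rss = np.sum(u_hat ** 2)

print(ess / tss, 1 - rss / tss)       # the two R^2 expressions agree
print(np.corrcoef(X, Y)[0, 1] ** 2)   # squared correlation: equal for a single regressor
```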
Proof that Total SS = Explained SS + Residual SS:
STEP 1
We first show that the residuals and the regressor values $X_i$ are orthogonal, which means:

$$\sum_{i=1}^{n} \hat{u}_i X_i = 0$$

This we can prove the following way. Because the residuals sum to zero (first order condition for the intercept),

$$\sum_{i=1}^{n} \hat{u}_i X_i = \sum_{i=1}^{n} \hat{u}_i (X_i - \bar{X})$$

We also know

$$\hat{u}_i = Y_i - \hat{\beta}_0 - \hat{\beta}_1 X_i = Y_i - (\bar{Y} - \hat{\beta}_1 \bar{X}) - \hat{\beta}_1 X_i = (Y_i - \bar{Y}) - \hat{\beta}_1 (X_i - \bar{X})$$

Putting this into the sum above we get

$$\sum_{i=1}^{n} \left( (Y_i - \bar{Y}) - \hat{\beta}_1 (X_i - \bar{X}) \right)(X_i - \bar{X}) = \sum_{i=1}^{n} (Y_i - \bar{Y})(X_i - \bar{X}) - \hat{\beta}_1 \sum_{i=1}^{n} (X_i - \bar{X})^2$$

This expression is zero exactly when

$$\hat{\beta}_1 = \frac{\sum_{i=1}^{n} (Y_i - \bar{Y})(X_i - \bar{X})}{\sum_{i=1}^{n} (X_i - \bar{X})^2} = \frac{s_{XY}}{s_X^2}$$

which is precisely the OLS estimator of the slope. This proves that the regressor and the residuals are orthogonal.
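A numerical check of Step 1 (hypothetical data): after an OLS fit, the residuals sum to zero and are orthogonal to the regressor up to floating-point error.

```python
import numpy as np

# Check sum(u_hat) = 0 and sum(u_hat * X) = 0 for an OLS fit on made-up data.
rng = np.random.default_rng(5)
X = rng.normal(size=60)
Y = 0.5 + 1.2 * X + rng.normal(size=60)

b1, b0 = np.polyfit(X, Y, deg=1)
u_hat = Y - (b0 + b1 * X)

print(np.sum(u_hat))        # ~0: first order condition for the intercept
print(np.sum(u_hat * X))    # ~0: orthogonality of residuals and regressor
```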
STEP 2
Write $Y_i - \bar{Y} = (\hat{Y}_i - \bar{Y}) + \hat{u}_i$ and square both sides:

$$\sum_{i=1}^{n} (Y_i - \bar{Y})^2 = \sum_{i=1}^{n} (\hat{Y}_i - \bar{Y})^2 + \sum_{i=1}^{n} \hat{u}_i^2 + 2 \sum_{i=1}^{n} \hat{u}_i (\hat{Y}_i - \bar{Y})$$

So TSS = ESS + RSS provided the cross term is zero, which is shown in the next step.
STEP 3
Prove that the cross term is zero:

$$\sum_{i=1}^{n} \hat{u}_i (\hat{Y}_i - \bar{Y}) = \sum_{i=1}^{n} \hat{u}_i \hat{Y}_i - \bar{Y} \sum_{i=1}^{n} \hat{u}_i$$

We know the sum of the residuals is equal to zero because their mean is equal to zero, so the second term vanishes and the formula becomes:

$$\sum_{i=1}^{n} \hat{u}_i \hat{Y}_i = \sum_{i=1}^{n} \hat{u}_i (\hat{\beta}_0 + \hat{\beta}_1 X_i) = \hat{\beta}_0 \sum_{i=1}^{n} \hat{u}_i + \hat{\beta}_1 \sum_{i=1}^{n} \hat{u}_i X_i = 0$$

where the last equality uses Step 1. This completes the proof that TSS = ESS + RSS.
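A numerical check of Step 3 (hypothetical data): the cross term is zero up to floating-point error, so TSS equals ESS + RSS.

```python
import numpy as np

# Check that the cross term sum(u_hat * (Y_hat - Ybar)) vanishes and TSS = ESS + RSS; made-up data.
rng = np.random.default_rng(6)
X = rng.uniform(size=120)
Y = 1.0 + 2.5 * X + rng.normal(scale=0.5, size=120)

b1, b0 = np.polyfit(X, Y, deg=1)
Y_hat = b0 + b1 * X
u_hat = Y - Y_hat

print(np.sum(u_hat * (Y_hat - Y.mean())))   # ~0: the cross term
tss = np.sum((Y - Y.mean()) ** 2)
print(tss)
print(np.sum((Y_hat - Y.mean()) ** 2) + np.sum(u_hat ** 2))   # ESS + RSS, equal to TSS
```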
$R^2$ can be zero for two reasons:
- $\hat{\beta}_1 = 0$, which makes the intercept equal to the average of Y, so every fitted value equals $\bar{Y}$ and the explained sum of squares (ESS) is 0 (as illustrated below).
- X is a constant, which means that the variance of X is zero. This leads to a $\hat{\beta}_1$ that is undefined.
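The first case can be illustrated with a tiny made-up data set in which the sample covariance of X and Y is zero, so the fitted slope and therefore $R^2$ are (numerically) zero.

```python
import numpy as np

# Toy data chosen so that the sample covariance of X and Y is zero.
X = np.array([1.0, 2.0, 3.0, 4.0])
Y = np.array([3.0, 5.0, 5.0, 3.0])

b1, b0 = np.polyfit(X, Y, deg=1)
Y_hat = b0 + b1 * X

ess = np.sum((Y_hat - Y.mean()) ** 2)
tss = np.sum((Y - Y.mean()) ** 2)
print(b1, ess / tss)   # slope ~0, so R^2 ~0
```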