Deep learning – session 3:
autoencoders
1 What is an autoencoder
Concept autoencoder Autoencoders are neural networks that learn an efficient
encoding of data by compressing the input into a latent
representation and then reconstructing the original input
from it, minimizing the difference between input and
output (the training target is the input itself).
Deep autoencoder A deep autoencoder has multiple hidden layers in both
the encoder and the decoder, enabling it to learn more
detailed and complex data representations.
Prototype autoencoder A prototypical autoencoder consists of:
Fully connected layers: the neurons in each layer are fully
connected to the next layer
Two parts:
o Encoder: compresses the input data into a smaller latent
representation
o Decoder: reconstructs the original data from this compressed form
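The encoder/decoder structure above can be sketched in a few lines of NumPy. This is a minimal illustration, not a trained model: the weight matrices, layer sizes, and batch size are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (chosen for illustration): 8-dim input, 3-dim latent code.
input_dim, latent_dim = 8, 3

# Untrained encoder and decoder weights of a single-hidden-layer autoencoder.
W_enc = rng.normal(scale=0.1, size=(input_dim, latent_dim))
W_dec = rng.normal(scale=0.1, size=(latent_dim, input_dim))

def encode(x):
    # Encoder: compress the input into the smaller latent representation.
    return np.tanh(x @ W_enc)

def decode(z):
    # Decoder: reconstruct the input from the latent code.
    return z @ W_dec

x = rng.normal(size=(5, input_dim))   # a batch of 5 samples
z = encode(x)                         # latent codes, shape (5, 3)
x_hat = decode(z)                     # reconstructions, shape (5, 8)

# Reconstruction loss: mean squared difference between input and output.
mse = np.mean((x - x_hat) ** 2)
```

Training would minimize `mse` with respect to `W_enc` and `W_dec` by gradient descent; here the point is only the shape of the computation: input → smaller latent code → reconstruction.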
2 Types of autoencoders
Undercomplete autoencoder
Overcomplete autoencoder
Sparse autoencoder
Convolutional autoencoder
Contractive autoencoder
Stacked autoencoder
Variational autoencoder
Denoising autoencoder
Undercomplete autoencoder An undercomplete autoencoder has a hidden layer
smaller than the input layer, creating a bottleneck.
This forces the model to compress the input and
learn a more compact representation (dimensionality
reduction). When only linear activation functions and
an MSE loss are used, the autoencoder learns the
same subspace as PCA.
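The PCA connection can be checked numerically. The optimal linear autoencoder with an MSE loss projects the (centered) data onto its top-k principal directions, which SVD gives directly; any other rank-k projection reconstructs at least as badly (Eckart–Young theorem). The data matrix and sizes below are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy centered data matrix (example values): 100 samples, 6 features.
X = rng.normal(size=(100, 6))
X = X - X.mean(axis=0)

k = 2  # bottleneck (latent) size

# Optimal linear autoencoder = projection onto the top-k principal
# directions, obtained from the SVD of the data matrix.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:k].T                 # encoder weights: top-k principal directions

Z = X @ W                    # encode: latent codes, shape (100, 2)
X_hat = Z @ W.T              # decode: rank-k reconstruction

pca_mse = np.mean((X - X_hat) ** 2)

# Compare against a random rank-k orthogonal projection: by Eckart-Young,
# the PCA reconstruction error is the minimum over all rank-k projections.
R = np.linalg.qr(rng.normal(size=(6, k)))[0]
rand_mse = np.mean((X - X @ R @ R.T) ** 2)
```

Here the encoder and decoder share the same (transposed) weights, which is exactly the tied-weight linear autoencoder the note describes.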
Overcomplete autoencoder An overcomplete autoencoder has a hidden layer
with more neurons than the input layer, so no
compression occurs. To prevent the network from
simply copying the input to the output,
regularization is used. Techniques include:
Sparse autoencoders
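A sparse autoencoder is one such regularization: a penalty on the hidden activations (here an L1 term, one common choice) is added to the reconstruction loss, so that only a few hidden units are active even though the hidden layer is larger than the input. The sizes, weights, and penalty strength below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Overcomplete setup (example sizes): 4-dim input, 16-dim hidden layer.
input_dim, hidden_dim = 4, 16
W_enc = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W_dec = rng.normal(scale=0.1, size=(hidden_dim, input_dim))

def sparse_loss(x, l1_weight=1e-3):
    # ReLU activations, so the L1 penalty can push them exactly to zero.
    h = np.maximum(0.0, x @ W_enc)
    x_hat = h @ W_dec
    recon = np.mean((x - x_hat) ** 2)          # reconstruction term
    sparsity = l1_weight * np.abs(h).mean()    # L1 sparsity penalty
    return recon + sparsity, recon

x = rng.normal(size=(10, input_dim))
total, recon = sparse_loss(x)                  # total >= recon always
```

During training, minimizing `total` trades reconstruction quality against activation sparsity; `l1_weight` controls that trade-off.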