Summary Deep Learning summarization files of howest - creative technologies & ai

Pages: 3
Uploaded on: 16-12-2025
Written in: 2024/2025


Deep learning – session 3: autoencoders
1 What is an autoencoder
Autoencoder: a neural network that learns an efficient data encoding by compressing the input into a latent representation and then reconstructing the original data, minimizing the difference between input and output (it tries to reconstruct its own input).
Deep autoencoder: has multiple hidden layers in both the encoder and the decoder, enabling it to learn more detailed and complex data representations.
A prototype autoencoder consists of:
- Fully connected layers: neurons in each layer are fully connected to the next
- Two parts:
  - Encoder: compresses the input data into a smaller latent representation
  - Decoder: reconstructs the original data from this compressed form
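The encoder/decoder split described above can be sketched as a tiny fully connected autoencoder in plain NumPy. This is an illustrative sketch, not from the notes; the layer sizes, learning rate, and training data are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8-dimensional input, 3-dimensional latent code.
n_in, n_latent = 8, 3

# One fully connected (linear) layer each for encoder and decoder.
W_enc = rng.normal(0.0, 0.1, (n_in, n_latent))
b_enc = np.zeros(n_latent)
W_dec = rng.normal(0.0, 0.1, (n_latent, n_in))
b_dec = np.zeros(n_in)

def encode(x):
    return x @ W_enc + b_enc            # compress input into the latent code

def decode(z):
    return z @ W_dec + b_dec            # reconstruct the input from the code

def mse(x, x_hat):
    return np.mean((x - x_hat) ** 2)    # reconstruction loss

# Train with plain gradient descent on random data.
X = rng.normal(size=(64, n_in))
lr = 0.05
losses = []
for _ in range(200):
    Z = encode(X)
    X_hat = decode(Z)
    losses.append(mse(X, X_hat))
    # Backpropagate the MSE loss through the two linear layers.
    G = 2.0 * (X_hat - X) / X.size      # dLoss/dX_hat
    G_z = G @ W_dec.T                   # dLoss/dZ (uses pre-update weights)
    W_dec -= lr * (Z.T @ G)
    b_dec -= lr * G.sum(axis=0)
    W_enc -= lr * (X.T @ G_z)
    b_enc -= lr * G_z.sum(axis=0)

# Reconstruction error should drop as training proceeds.
print(losses[0], "->", losses[-1])
```

The same structure carries over to deep autoencoders by stacking more layers (with nonlinear activations) in each half.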


2 Types of autoencoders
- Undercomplete autoencoder
- Overcomplete autoencoder
- Sparse autoencoder
- Convolutional autoencoder
- Contractive autoencoder
- Stacked autoencoders
- Variational autoencoder
- Denoising autoencoder

Undercomplete autoencoder: has a smaller hidden layer than the input, creating a bottleneck. This forces the model to compress the input and learn a more compact representation (dimensionality reduction). When only linear activation functions and an MSE loss are used, the autoencoder functions like a PCA model.
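The PCA connection can be checked numerically: with linear activations and MSE, the best a rank-k linear autoencoder can do is project onto the top-k principal components (the Eckart-Young theorem). A NumPy sketch, with made-up data and dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical centered data: 100 samples, 8 features; bottleneck size k = 3.
X = rng.normal(size=(100, 8))
X = X - X.mean(axis=0)
k = 3

# PCA via SVD: the top-k right singular vectors span the principal subspace.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
V_k = Vt[:k].T                          # (8, k) projection matrix

# Linear undercomplete autoencoder: encode = project, decode = back-project.
Z = X @ V_k                             # latent codes (dimensionality reduction)
X_hat = Z @ V_k.T                       # reconstruction = truncated SVD of X
pca_mse = np.mean((X - X_hat) ** 2)

# Any other rank-k linear encode/decode pair reconstructs no better,
# e.g. projecting through a random matrix:
W = rng.normal(size=(8, k))
X_hat_rand = (X @ W) @ np.linalg.pinv(W)
rand_mse = np.mean((X - X_hat_rand) ** 2)

print(pca_mse <= rand_mse)              # True: PCA is the optimal linear bottleneck
```

Nonlinear activations break this equivalence, which is what lets deep autoencoders learn richer representations than PCA.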




Overcomplete autoencoder: has a hidden layer with more neurons than the input layer, meaning no compression occurs. To prevent the autoencoder from simply copying the input to the output, regularization is used. Techniques include:
- Sparse autoencoders
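As an illustration of the sparse-autoencoder idea (a sketch, not from the notes): one common regularizer adds an L1 penalty on the latent code to the reconstruction loss, so that for equally good reconstructions, sparser codes score lower. The weight `lam` and the example arrays below are hypothetical.

```python
import numpy as np

def sparse_autoencoder_loss(x, x_hat, z, lam=0.1):
    """MSE reconstruction loss plus an L1 sparsity penalty on the code z."""
    recon = np.mean((x - x_hat) ** 2)    # reconstruction term
    sparsity = lam * np.mean(np.abs(z))  # pushes latent activations toward 0
    return recon + sparsity

x = np.array([1.0, 2.0, 3.0])
x_hat = np.array([1.0, 2.0, 3.0])        # perfect reconstruction
z_dense = np.array([1.0, -1.0, 1.0, -1.0])
z_sparse = np.array([1.0, 0.0, 0.0, 0.0])

# With identical reconstructions, the sparser code gets the lower loss.
print(sparse_autoencoder_loss(x, x_hat, z_dense))   # 0.1
print(sparse_autoencoder_loss(x, x_hat, z_sparse))  # 0.025
```

This is what allows an overcomplete hidden layer to avoid the trivial copy solution: only a few latent units may be active for any given input.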