Deep learning – session 4: deep generative models
1 Generative models
1.1 Discriminative vs generative models
Discriminative classifiers
o Learn the decision boundary between the classes
o Learn p(y|x)
Generative classifiers
o Learn the distribution of the classes
o Learn p(x|y) and p(y), and can therefore estimate p(x, y) = p(x|y) p(y)
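A minimal sketch of this contrast (assuming scikit-learn and toy data, neither of which is in the notes): logistic regression models p(y|x) directly, while Gaussian naive Bayes models p(x|y) and p(y) and classifies via Bayes' rule.

import numpy as np
from sklearn.linear_model import LogisticRegression  # discriminative: models p(y|x)
from sklearn.naive_bayes import GaussianNB           # generative: models p(x|y) and p(y)

rng = np.random.default_rng(0)
# toy 2-class data: two Gaussian blobs in 2D (hypothetical example data)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

disc = LogisticRegression().fit(X, y)   # learns the decision boundary
gen = GaussianNB().fit(X, y)            # learns per-class distributions + class priors

print(disc.predict_proba(X[:1]))        # p(y|x) estimated directly
print(gen.predict_proba(X[:1]))         # p(y|x) derived from p(x|y) p(y) via Bayes' rule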
2 Variational autoencoder (VAE)
Autoencoder: a neural network that learns to compress data into a lower-dimensional latent space and then reconstructs the original data, capturing essential features in the process.
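A minimal autoencoder sketch, assuming PyTorch and a 784-dimensional input (e.g. flattened 28x28 images); these choices are illustrative, not from the notes:

import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim), nn.Sigmoid())

    def forward(self, x):
        z = self.encoder(x)       # compress to the lower-dimensional latent space
        return self.decoder(z)    # reconstruct the original data from the latent code

model = Autoencoder()
x = torch.rand(16, 784)                          # hypothetical batch of flattened images
loss = nn.functional.mse_loss(model(x), x)       # reconstruction loss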
Goal of the variational autoencoder:
Learn a continuous, probabilistic latent space that can generate new, similar data points by sampling from learned distributions, fostering variability in generated outputs.
What is the difference between a normal autoencoder and a variational autoencoder?
A standard autoencoder encodes data to fixed points, while a variational autoencoder (VAE) encodes data to probability distributions, allowing for more diverse data generation.
Instead of 1 encoding vector of dimension n, there are now 2 vectors, each of dimension n:
o Vector with the mean values: μ
o Vector with the standard deviations: σ
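A sketch of this encoding step (again assuming PyTorch): the encoder outputs the two n-dimensional vectors μ and σ (here via a log-variance head, a common convention not stated in the notes), and a latent sample is drawn as z = μ + σ · ε with ε ~ N(0, I), the reparameterization trick.

import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)       # vector of means μ
        self.log_var = nn.Linear(128, latent_dim)  # log-variance, so σ = exp(0.5 * log_var)

    def forward(self, x):
        h = self.hidden(x)
        mu, log_var = self.mu(h), self.log_var(h)
        sigma = torch.exp(0.5 * log_var)
        eps = torch.randn_like(sigma)              # ε ~ N(0, I)
        z = mu + sigma * eps                       # sample from N(μ, σ²), differentiable in μ and σ
        return z, mu, log_var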
Figure 1: architecture of a VAE