Vanilla Autoencoder
• What is it?
Reconstruct high-dimensional data using a neural network model with a narrow
bottleneck layer.
The bottleneck layer captures the compressed latent code, so a nice by-product
is dimension reduction.
The low-dimensional representation can be used as the representation of the data
in various applications, e.g., image retrieval, data compression …
[Figure: encoder maps the input x to the latent code z; the decoder/generator reconstructs x from z.]
Latent code: the compressed low-dimensional representation of the input data
Vanilla Autoencoder
• How does it work?
[Figure: the encoder maps X to Z; the decoder/generator maps Z back to X. Ideally the input x and the reconstructed input are identical. The encoder network performs dimension reduction, just like PCA.]
Vanilla Autoencoder
• Training
[Figure: a fully connected autoencoder with input layer (x1 … x6), hidden layer (a1 … a4), and output layer (x̂1 … x̂6).]
• The number of hidden units is usually smaller than the number of inputs
• Dimension reduction --- representation learning
The distance between the input and its reconstruction can be measured by the Mean Squared Error (MSE):
\mathcal{L}_{MSE} = \frac{1}{n} \sum_{i=1}^{n} \left( x_i - G(E(x_i)) \right)^2
where n is the number of variables, E is the encoder, and G is the decoder.
• The autoencoder tries to learn an approximation to the identity function, so that the input is "compressed" into the latent features, discovering interesting structure in the data.
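For concreteness, here is a minimal PyTorch sketch of this training setup; the layer sizes, activations, and optimizer settings are illustrative assumptions, not taken from the slides:

```python
import torch
import torch.nn as nn

# Illustrative sizes: 784 inputs (flattened 28x28 MNIST image), 32 hidden units.
encoder = nn.Sequential(nn.Linear(784, 32), nn.Sigmoid())  # E: x -> latent code
decoder = nn.Sequential(nn.Linear(32, 784), nn.Sigmoid())  # G: latent code -> x_hat

optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)
mse = nn.MSELoss()

def train_step(x):              # x: (batch, 784), values in [0, 1]
    z = encoder(x)              # compressed latent code
    x_hat = decoder(z)          # reconstruction G(E(x))
    loss = mse(x_hat, x)        # L_MSE between reconstruction and input
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At test time only `encoder(x)` is kept, and its output is used as the extracted features.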
Vanilla Autoencoder
• Testing/Inference
[Figure: at test time only the encoder (input layer → hidden layer) is used; the hidden activations a1 … a4 are the extracted features.]
• Autoencoder is an unsupervised learning method if we consider the latent code as the "output".
• Autoencoder is also a self-supervised (self-taught) learning method, a type of supervised learning where the training labels are determined by the input data.
• Word2Vec (from the RNN lecture) is another unsupervised, self-taught learning example.
[Figure: autoencoder for the MNIST dataset (28×28×1, 784 pixels): input x and reconstruction x̂.]
Vanilla Autoencoder
• Power of Latent Representation
• t-SNE visualization on MNIST: PCA vs. Autoencoder
[Figure: t-SNE embeddings of MNIST, PCA vs. Autoencoder; the autoencoder (winner) gives better-separated clusters. From the 2006 Science paper by Hinton and Salakhutdinov.]
Vanilla Autoencoder
• Discussion
• The hidden layer is overcomplete if it is larger than the input layer
• No compression
• No guarantee that the hidden units extract meaningful features
Denoising Autoencoder
• Visualizing the learned features
[Figure: feature visualization: one neuron == one feature extractor; each hidden neuron's weights are reshaped to the input image size and displayed.]
Denoising Autoencoder
• Denoising Autoencoder & Dropout
Denoising autoencoder was proposed in 2008, 4 years before the dropout paper (Hinton et al., 2012). Denoising autoencoder can be seen as applying dropout between the input and the first layer.
Denoising autoencoder can also be seen as one type of data augmentation on the input, as the sketch below illustrates.
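A minimal sketch of that view, reusing `encoder`, `decoder`, and `mse` from the earlier snippet; the masking rate of 0.3 is an assumed example value:

```python
import torch

def corrupt(x, drop_prob=0.3):
    # Randomly zero out input dimensions (masking noise) -- effectively dropout
    # applied between the input and the first layer.
    mask = (torch.rand_like(x) > drop_prob).float()
    return x * mask

def denoising_loss(x):
    x_noisy = corrupt(x)               # corrupted input: a form of data augmentation
    x_hat = decoder(encoder(x_noisy))  # encode/decode the corrupted input
    return mse(x_hat, x)               # but reconstruct the *clean* input
```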
Sparse Autoencoder
• Why?
• Even when the number of hidden units is large (perhaps even greater than the number of input pixels), we can still discover interesting structure by imposing other constraints on the network.
• In particular, if we impose a "sparsity" constraint on the hidden units, the autoencoder will still discover interesting structure in the data, even if the number of hidden units is large.
[Figure: input layer (x1 … x6) and sigmoid hidden layer (a1 … a4); hidden outputs near 0 (e.g., 0.02, 0.01) are "inactive", outputs near 1 (e.g., 0.97, 0.98) are "active".]
Sparse Autoencoder
• Sparsity Regularization
[Figure: input layer (x1 … x6) and sigmoid hidden layer (a1 … a4) with active/inactive outputs, as above.]
Given M data samples (batch size) and the Sigmoid activation function, the active ratio of a neuron a_j is
\hat{\rho}_j = \frac{1}{M} \sum_{m=1}^{M} a_j^{(m)}
To make the output "sparse", we would like to enforce the following constraint, where \rho is a "sparsity parameter", such as 0.2 (20% of the neurons):
\hat{\rho}_j = \rho
The penalty term is as follows, where s is the number of activation outputs:
\mathcal{L}_{\rho} = \sum_{j=1}^{s} KL(\rho \,\|\, \hat{\rho}_j) = \sum_{j=1}^{s} \left( \rho \log \frac{\rho}{\hat{\rho}_j} + (1-\rho) \log \frac{1-\rho}{1-\hat{\rho}_j} \right)
The total loss:
\mathcal{L}_{total} = \mathcal{L}_{MSE} + \lambda \mathcal{L}_{\rho}
The number of hidden units can be greater than the number of input variables.
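A sketch of this penalty in PyTorch; ρ, λ, and the small epsilon for numerical stability are assumed example values, and `encoder`, `decoder`, and `mse` are the sigmoid networks from the earlier snippet:

```python
import torch

def sparsity_penalty(a, rho=0.2, eps=1e-8):
    # a: (M, s) sigmoid hidden activations for a batch of M samples.
    rho_hat = a.mean(dim=0)                                    # active ratio of each neuron
    kl = rho * torch.log(rho / (rho_hat + eps)) + \
         (1 - rho) * torch.log((1 - rho) / (1 - rho_hat + eps))
    return kl.sum()                                            # L_rho = sum_j KL(rho || rho_hat_j)

def sparse_loss(x, lam=1e-3):
    a = encoder(x)                                   # sigmoid activations
    x_hat = decoder(a)
    return mse(x_hat, x) + lam * sparsity_penalty(a) # L_total = L_MSE + lambda * L_rho
```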
Sparse Autoencoder
• Sparsity Regularization: smaller ρ == more sparse
[Figure: MNIST reconstructions comparing Input, Autoencoder, and Sparse Autoencoder.]
Sparse Autoencoder
• Different regularization losses: an alternative is an \mathcal{L}_1 penalty on the hidden activation output
Method 1: Sigmoid hidden activation, Sigmoid reconstruction activation, loss \mathcal{L}_{total} = \mathcal{L}_{MSE} + \mathcal{L}_{\rho}
Method 2: ReLU hidden activation, Softplus reconstruction activation, loss \mathcal{L}_{total} = \mathcal{L}_{MSE} + \|\boldsymbol{a}\|_1
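Method 2 could be sketched like this; the penalty weight λ and the layer sizes are assumptions, and `mse` comes from the earlier snippet:

```python
import torch.nn as nn

relu_encoder = nn.Sequential(nn.Linear(784, 32), nn.ReLU())
softplus_decoder = nn.Sequential(nn.Linear(32, 784), nn.Softplus())

def l1_sparse_loss(x, lam=1e-3):
    a = relu_encoder(x)              # ReLU hidden activations
    x_hat = softplus_decoder(a)      # Softplus reconstruction
    l1 = a.abs().sum(dim=1).mean()   # ||a||_1, averaged over the batch
    return mse(x_hat, x) + lam * l1  # L_total = L_MSE + lambda * ||a||_1
```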
Sparse Autoencoder
• Sparse Autoencoder vs. Denoising Autoencoder
[Figure: feature extractors of the Sparse Autoencoder vs. feature extractors of the Denoising Autoencoder.]
Sparse Autoencoder
• Autoencoder vs. Denoising Autoencoder vs. Sparse Autoencoder
[Figure: MNIST reconstructions comparing Input, Autoencoder, Sparse Autoencoder, and Denoising Autoencoder.]
Contractive Autoencoder
• Why?
• Denoising Autoencoder and Sparse Autoencoder overcome the overcomplete problem via the input and hidden layers.
• Could we add an explicit term in the loss to avoid uninteresting features?
We wish for features that ONLY reflect variations observed in the training set.
Contractive Autoencoder
• How
• Penalize the representation for being too sensitive to the input
• Improve robustness to small perturbations
• Measure the sensitivity by the Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input (see the sketch below)
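A sketch of that penalty for a single sigmoid encoder layer, where the squared Frobenius norm of the Jacobian has a closed form; the weight λ and layer sizes are assumptions:

```python
import torch
import torch.nn as nn

enc = nn.Linear(784, 32)                         # encoder: a = sigmoid(W x + b)
dec = nn.Sequential(nn.Linear(32, 784), nn.Sigmoid())
mse = nn.MSELoss()

def jacobian_penalty(x):
    # For a = sigmoid(W x + b), the Jacobian is J = diag(a * (1 - a)) @ W, so
    # ||J||_F^2 = sum_j (a_j * (1 - a_j))^2 * sum_i W_ji^2.
    a = torch.sigmoid(enc(x))                    # (batch, 32)
    grad_sq = (a * (1 - a)) ** 2                 # (batch, 32)
    w_sq = (enc.weight ** 2).sum(dim=1)          # (32,)
    return (grad_sq * w_sq).sum(dim=1).mean()    # averaged over the batch

def cae_loss(x, lam=1e-4):
    a = torch.sigmoid(enc(x))
    x_hat = dec(a)
    return mse(x_hat, x) + lam * jacobian_penalty(x)
```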
Contractive Autoencoder
• vs. Denoising Autoencoder
• Advantages
• CAE can better model the distribution of raw data
• Disadvantages
• DAE is easier to implement
• CAE needs second-order optimization (conjugate gradient, LBFGS)
• Vanilla Autoencoder
• Denoising Autoencoder
• Sparse Autoencoder
• Contractive Autoencoder
• Stacked Autoencoder
• Variational Autoencoder (VAE)
• From Neural Network Perspective
• From Probability Model Perspective
Before we start
• Question
• Are the previous autoencoders generative models?
• Recap: We want to learn a probability distribution p(x) over x
o Generation (sampling): x_new ~ p(x)
(NO. The compressed latent codes of autoencoders do not follow a prior distribution, so an autoencoder cannot learn to represent the data distribution.)
o Density Estimation: p(x) is high if x looks like real data
(NO)
o Unsupervised Representation Learning: discovering the underlying structure of the data distribution (e.g., ears, nose, eyes …)
(YES. Autoencoders learn the feature representation.)
• Vanilla Autoencoder
• Denoising Autoencoder
• Sparse Autoencoder
• Contractive Autoencoder
• Stacked Autoencoder
• Variational Autoencoder (VAE)
• From Neural Network Perspective
• From Probability Model Perspective
Variational Autoencoder
• How to perform generation (sampling)?
[Figure: left, an autoencoder with input layer (x1 … x6), hidden layer (z1 … z4), and output layer (x̂1 … x̂6); right, a decoder-only network whose latent input is drawn from N(0, 1).]
Can the hidden output be a prior distribution, e.g., the Normal distribution? Then the decoder (generator) maps N(0, 1) to the data space:
p(X) = \int_{Z} p(X|Z) \, p(Z) \, dZ
Auto-Encoding Variational Bayes. Diederik P. Kingma, Max Welling. ICLR 2013
Variational Autoencoder
• Quick Overview
[Figure: a bidirectional mapping between the data space x and the latent space N(0, 1): q(z|x) performs inference (encoder) and contributes \mathcal{L}_{kl}; p(x|z) performs generation (decoder) and contributes \mathcal{L}_{MSE}.]
\mathcal{L}_{total} = \mathcal{L}_{MSE} + \mathcal{L}_{kl}
Variational Autoencoder
• The neural net perspective
• A variational autoencoder consists of an encoder, a decoder, and a loss function
Variational Autoencoder
• Loss function
The loss has two terms: a reconstruction term, which can be represented by MSE, and a regularization term, the KL divergence between the encoder's distribution and the prior.
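One common way to write this loss in PyTorch is sketched below; the encoder/decoder definitions are assumed, and using summed MSE as the reconstruction term is the simplification made on the slides:

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_hat, mu, logvar):
    # Reconstruction term, represented here by MSE (summed over pixels).
    recon = F.mse_loss(x_hat, x, reduction="sum")
    # Regularization term: closed-form KL( N(mu, sigma^2) || N(0, 1) ).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl                 # L_total = L_MSE + L_kl
```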
Variational Autoencoder
• Why KL(Q||P) and not KL(P||Q)?
• Which direction of the KL divergence should we use?
• Some applications require an approximation that usually places high probability anywhere that the true distribution places high probability: KL(P||Q).
• VAE requires an approximation that rarely places high probability anywhere that the true distribution places low probability: KL(Q||P).
Variational Autoencoder
• Reparameterization Trick
[Figure: the encoder maps the input x1 … x6 to hidden units h1 … h6, which predict means μ1 … μ4 and standard deviations δ1 … δ4; latent variables are resampled as z_i ~ N(μ_i, δ_i) and decoded into the reconstruction x̂1 … x̂6.]
1. Encode the input
2. Predict the means
3. Predict the standard deviations
4. Use the predicted means and standard deviations to sample new latent variables individually
5. Reconstruct the input
Variational Autoencoder
• Reparameterization Trick
• z ~ N(μ, σ) is not differentiable with respect to μ and σ
• To make sampling z differentiable:
• z = μ + σ * ε, where ε ~ N(0, 1)
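A sketch of the trick inside an encoder module; the layer sizes are assumptions, and predicting the log-variance rather than σ directly is a common numerical-stability choice, not something mandated by the slides:

```python
import torch
import torch.nn as nn

class ReparamEncoder(nn.Module):
    def __init__(self, in_dim=784, hidden=256, latent=4):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent)       # predicts the means
        self.logvar = nn.Linear(hidden, latent)   # predicts log-variances (std = exp(0.5 * logvar))

    def forward(self, x):
        h = self.body(x)
        mu, logvar = self.mu(h), self.logvar(h)
        eps = torch.randn_like(mu)                # eps ~ N(0, 1): the only random step
        z = mu + torch.exp(0.5 * logvar) * eps    # z = mu + sigma * eps, differentiable in mu, sigma
        return z, mu, logvar
```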
Variational Autoencoder
• Where is 'variational'?
• Vanilla Autoencoder
• Denoising Autoencoder
• Sparse Autoencoder
• Contractive Autoencoder
• Stacked Autoencoder
• Variational Autoencoder (VAE)
• From Neural Network Perspective
• From Probability Model Perspective
Variational Autoencoder
• Problem Definition
Goal: Given X = {x_1, x_2, x_3, …, x_n}, find p(X) to represent X.
How: It is difficult to directly model p(X), so alternatively, we can write
p(X) = \int_{Z} p(X|Z) \, p(Z) \, dZ
where p(Z) = N(0, 1) is a prior/known distribution, i.e., we sample X from Z.
Variational Autoencoder
• The probability model perspective
• P(X) is hard to model
p(X) = \int_{Z} p(X|Z) \, p(Z) \, dZ
• Alternatively, we learn the joint distribution of X and Z:
p(X, Z) = p(Z) \, p(X|Z), \qquad p(X) = \int_{Z} p(X, Z) \, dZ
Variational Autoencoder
• Monte Carlo?
• n might need to be extremely large before we have an accurate estimation of P(X)
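To see why, a naive Monte Carlo estimate would average the likelihood over samples from the prior, as sketched below; `decoder` and `likelihood` are hypothetical placeholders, and in a high-dimensional latent space almost all prior samples contribute a negligible likelihood, so n must be enormous:

```python
import torch

def naive_monte_carlo_px(x, decoder, likelihood, n=100_000, latent_dim=4):
    # p(X) ~= (1/n) * sum_i p(X | z_i),  with  z_i ~ N(0, 1)
    z = torch.randn(n, latent_dim)
    with torch.no_grad():
        x_hat = decoder(z)                 # (n, data_dim) decoded samples
        px_given_z = likelihood(x, x_hat)  # (n,) likelihood of x under each sample
    return px_given_z.mean()
```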
Variational Autoencoder
• Monte Carlo?
• Pixel difference is different from perceptual difference
Variational Autoencoder
• Recap: Variational Inference
• VI turns inference into optimization
[Figure: the ideal posterior p(z|x) and its approximation.]
p(z|x) = \frac{p(x, z)}{p(x)} \propto p(x, z)
Variational Autoencoder
• Variational Inference
• VI turns inference into optimization
[Figure: the approximation is a parameterized distribution.]
Variational Autoencoder
• Setting up the objective
• Maximize P(X)
• Set Q(z) to be an arbitrary distribution
p(z|X) = \frac{p(X|z) \, p(z)}{p(X)}
Goal: maximize log p(X)
Variational Autoencoder
• Setting up the objective
[Figure: log p(X) (the ideal goal) is difficult to compute; it splits into a reconstruction/decoder term and a KL divergence term involving the encoder, so the goal becomes optimizing this bound. In the network, q(z|x) performs inference and p(x|z) performs generation, giving]
\mathcal{L}_{total} = \mathcal{L}_{MSE} + \mathcal{L}_{kl}
Variational Autoencoder
• Setting up the objective: ELBO
[Figure: the gap between log p(X) and the bound is the KL divergence between the encoder and the ideal posterior; maximizing the bound is equivalent to minimizing the negative ELBO.]
p(z|X) = \frac{p(X, z)}{p(X)}
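A standard sketch of the derivation the figures summarize (not taken verbatim from the slides), in the paper's notation:

```latex
\log p(X)
  = \underbrace{\mathbb{E}_{z \sim Q(z|X)}\!\left[\log p(X|z)\right]
      - \mathrm{KL}\!\left(Q(z|X)\,\|\,p(z)\right)}_{\text{ELBO: reconstruction} \; - \; \text{regularization}}
  + \underbrace{\mathrm{KL}\!\left(Q(z|X)\,\|\,p(z|X)\right)}_{\ge 0,\ \text{intractable}}
  \;\ge\; \text{ELBO}
```

Maximizing the ELBO therefore maximizes a lower bound on log p(X) while pushing Q(z|X) toward the true posterior; in the network, the reconstruction term becomes \mathcal{L}_{MSE} and the regularization term becomes \mathcal{L}_{kl}.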
Variational Autoencoder
• VAE is a Generative Model
• p(Z|X) is not N(0, 1).
• Can we input N(0, 1) to the decoder for sampling? YES: the goal of the KL term is to make p(Z|X) close to N(0, 1), so we can draw z from the prior and decode it, as sketched below.
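A sketch of sampling from a trained VAE under that assumption; the `decoder` and latent size are placeholders:

```python
import torch

def sample_new_data(decoder, n=16, latent_dim=4):
    z = torch.randn(n, latent_dim)   # draw latent codes from the prior N(0, 1)
    with torch.no_grad():
        x_new = decoder(z)           # decoder (generator) maps N(0, 1) to data space
    return x_new
```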
Variational Autoencoder
• VAE vs. Autoencoder
• VAE: distribution representation, p(z|x) is a distribution
• AE: feature representation, h = E(x) is deterministic
Summary: Take Home Message
• Autoencoders learn data representation in an unsupervised/ self-supervised way.
• Autoencoders learn data representation but cannot model the data distribution p(X).
• Different from the vanilla autoencoder, in the sparse autoencoder the number of hidden units can be greater than the number of input variables.
• VAE
• …
• …
• …
• …
• …
• …