Why is p(z) a standard Gaussian in a variational autoencoder?

In a variational autoencoder, the objective function has two terms: a reconstruction term, which makes the output match the input x, and a regularizer, which pushes q(z|x) close to the prior p(z) via KL divergence. What I don't understand is why we can assume that p(z) is a standard Gaussian, with mean 0 and variance 1.
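Writing it out, the objective (the usual ELBO) is to maximize

E_{q(z|x)}[ log p(x|z) ] - KL( q(z|x) || p(z) )

where the first term is the reconstruction term and the second is the regularizer.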

Why not, say, a variance less than 1, so that more information is condensed into narrower Gaussians in the hidden layer?

Thank you

1 Answer

Answered by Andrea Asperti

Provided the network is sufficiently powerful to synthesize complex functions, the shape of the prior should, in theory, have little influence. In the specific case of the variance of the Gaussian you take as the prior, the network can easily adapt to a different variance by scaling the relevant statistics of the posterior distributions Q(z|X), and suitably rescaling the sampled values in the next layer of the network. The resulting network would have precisely the same behaviour (and loss) as the previous one. So the variance of the prior Gaussian merely fixes the unit of measure for the latent space. The topic is discussed in the excellent tutorial on Variational Autoencoders by Doersch (Section 2.4.3); you might also be interested in having a look at my blog.
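To make the rescaling argument concrete, here is a minimal NumPy sketch (the function kl_gauss, the posterior statistics mu and sigma, and the alternative prior standard deviation s are all illustrative, not from any particular VAE implementation). It checks that scaling the posterior statistics by s and measuring the KL against N(0, s^2) leaves the KL term unchanged, and that the corresponding latent sample is just s times the original, which the next layer can undo by dividing by s:

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_gauss(mu, sigma, prior_sigma):
    """Closed-form KL( N(mu, sigma^2) || N(0, prior_sigma^2) )."""
    return (np.log(prior_sigma / sigma)
            + (sigma**2 + mu**2) / (2 * prior_sigma**2)
            - 0.5)

# Illustrative posterior statistics Q(z|X) for one input.
mu, sigma = 0.7, 0.4
s = 0.1  # an alternative prior std, e.g. the "variance less than 1" case

# KL term against the standard Gaussian prior N(0, 1).
kl_standard = kl_gauss(mu, sigma, prior_sigma=1.0)

# Scale the posterior statistics by s and measure KL against N(0, s^2).
kl_rescaled = kl_gauss(s * mu, s * sigma, prior_sigma=s)

print(np.isclose(kl_standard, kl_rescaled))  # True: the KL term is identical

# Sampling is equivalent up to the same fixed scale factor:
eps = rng.standard_normal()
z_standard = mu + sigma * eps          # reparametrized sample under N(0, 1)
z_rescaled = s * mu + s * sigma * eps  # reparametrized sample under N(0, s^2)
print(np.isclose(z_rescaled, s * z_standard))  # True: dividing by s undoes it
```

The cancellation holds coordinate-wise for the diagonal Gaussians used in practice, which is why the prior variance only fixes the unit of measure of the latent space.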