Discovering variational autoencoders
Alexandros Kalousis
In this capsule we give a high-level view of Variational Autoencoders (VAEs), a family of generative models consisting of an encoder that maps instances to a latent space and a decoder that maps samples from the latent space back to the original input space. This encoding-decoding architecture enables several interesting applications, such as conditional generation and style transfer. In addition, the presence of a decoder allows us to easily incorporate domain knowledge, such as physical laws, grounding the semantics of the latent space in real-world entities. We provide a small example on gait modelling.
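As a preview of the architecture discussed in the session, here is a minimal sketch of a VAE in PyTorch. The layer sizes, the 784-dimensional input (a flattened MNIST-style image), and all names are illustrative assumptions, not the model used in the course:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal VAE: the encoder maps an input to a Gaussian over the
    latent space; the decoder maps latent samples back to input space."""

    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, latent_dim)       # mean of q(z|x)
        self.log_var = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),             # logits over pixels
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, log_var = self.mu(h), self.log_var(h)
        # Reparameterisation trick: z = mu + sigma * eps keeps sampling differentiable.
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)
        return self.decoder(z), mu, log_var

def neg_elbo(x, x_hat_logits, mu, log_var):
    # Negative ELBO: reconstruction term plus KL(q(z|x) || N(0, I)).
    recon = F.binary_cross_entropy_with_logits(x_hat_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return recon + kl

# Toy usage with random inputs in [0, 1]:
model = VAE()
x = torch.rand(32, 784)
x_hat_logits, mu, log_var = model(x)
loss = neg_elbo(x, x_hat_logits, mu, log_var)
```

Training minimises the negative ELBO; once trained, decoding samples z ~ N(0, I) generates new instances, which is the generative use of the decoder mentioned above.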
Training Suggestion
Course excerpts
"Variational autoencoders"
A session from the course "Deep learning", available here. Slides are available here.
For more information