Variational Autoencoders (VAEs) are a type of autoencoder that has gained popularity for its generative capabilities. Unlike traditional autoencoders, VAEs encode inputs as probability distributions over a latent space, allowing for the generation of new content. Classic autoencoders, on the other hand, encode inputs as single fixed points in the latent space.
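The contrast can be sketched in a few lines of NumPy. This is a toy illustration with untrained, randomly initialized weights (the names `classic_encode`, `vae_encode`, and `vae_sample` are hypothetical, not from any library): a classic encoder maps an input to one point, while a VAE-style encoder produces the mean and log-variance of a Gaussian, from which different latent samples can be drawn.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical encoder weights for illustration: input dim 4 -> latent dim 2.
W = rng.normal(size=(4, 2))

def classic_encode(x):
    # A classic autoencoder maps each input to a single point in latent space.
    return x @ W

def vae_encode(x):
    # A VAE maps each input to a distribution: here, the mean and log-variance
    # of a diagonal Gaussian over the latent space (unit variance for simplicity).
    mu = x @ W
    log_var = np.zeros_like(mu)
    return mu, log_var

def vae_sample(mu, log_var):
    # Sampling via the reparameterization trick: z = mu + sigma * epsilon.
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

x = rng.normal(size=(1, 4))
z_point = classic_encode(x)               # the same point every time for this x
mu, log_var = vae_encode(x)
z1 = vae_sample(mu, log_var)
z2 = vae_sample(mu, log_var)
# z1 and z2 differ: one input yields many latent samples, which is
# what lets a VAE generate varied new content from the latent space.
```

Drawing two samples from the same input produces two different latent codes, whereas the classic encoder is deterministic; this stochasticity is the hook for generation.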

From ai.plainenglish.io
Table of contents
- About normal autoencoders
- Types of traditional autoencoders
- What are normal autoencoders used for?
- The Content Generation problem
- How classic autoencoders map input to the latent space
- Continuity and completeness
- Normal encoders don’t work here
- Why does this happen?
- Say hello to Variational Autoencoders (VAEs)
- How are VAEs different from traditional autoencoders?
- First difference: encodings are probability distributions
- Second difference: KL divergence + reconstruction error for optimization
- Summary
- References
