Variational autoencoder

In machine learning, a variational autoencoder (VAE) is an artificial neural network architecture introduced by Diederik P. Kingma and Max Welling that belongs to the families of probabilistic graphical models and variational Bayesian methods. Although this type of model was initially designed for unsupervised learning, it has also proven effective for semi-supervised and supervised learning.

Comment
In machine learning, a variational autoencoder (VAE) is an artificial neural network architecture introduced by Diederik P. Kingma and Max Welling that belongs to the families of probabilistic graphical models and variational Bayesian methods. Although this type of model was initially designed for unsupervised learning, it has also proven effective for semi-supervised and supervised learning.
Cs1Dates
y
Date
June 2021
Depiction
Reparameterization Trick.png
Reparameterized Variational Autoencoder.png
VAE Basic.png
Has abstract
In machine learning, a variational autoencoder (VAE) is an artificial neural network architecture introduced by Diederik P. Kingma and Max Welling, belonging to the families of probabilistic graphical models and variational Bayesian methods. Variational autoencoders are often associated with the autoencoder model because of their architectural affinity, but they differ significantly in goal and mathematical formulation. Variational autoencoders are probabilistic generative models that require neural networks as only a part of their overall structure, as e.g. in VQ-VAE. The neural network components are typically referred to as the encoder and decoder for the first and second component respectively. The first neural network maps the input variable to a latent space that corresponds to the parameters of a variational distribution. In this way, the encoder can produce multiple different samples that all come from the same distribution. The decoder has the opposite function, which is to map from the latent space to the input space, in order to produce or generate data points. Both networks are typically trained together using the reparameterization trick, although the variance of the noise model can be learned separately. Although this type of model was initially designed for unsupervised learning, it has also proven effective for semi-supervised and supervised learning.
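The abstract above describes the encoder mapping an input to the parameters of a variational distribution, the decoder mapping latent samples back to input space, and both being trained via the reparameterization trick. A minimal NumPy sketch of that pipeline is below; the dimensions and the random linear layers standing in for trained networks are illustrative assumptions, not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (assumptions): 4-D input, 2-D latent space.
x_dim, z_dim = 4, 2

# Encoder: maps input x to the parameters (mean, log-variance) of a diagonal
# Gaussian variational distribution q(z|x). Random linear maps stand in for
# a trained neural network here.
W_mu = rng.normal(size=(z_dim, x_dim))
W_logvar = rng.normal(size=(z_dim, x_dim))

def encode(x):
    return W_mu @ x, W_logvar @ x

# Decoder: maps a latent sample z back to input space.
W_dec = rng.normal(size=(x_dim, z_dim))

def decode(z):
    return W_dec @ z

# Reparameterization trick: rather than sampling z ~ N(mu, sigma^2) directly
# (not differentiable w.r.t. mu and sigma), sample eps ~ N(0, I) and set
# z = mu + sigma * eps, so gradients can flow through mu and sigma.
def reparameterize(mu, logvar):
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

x = rng.normal(size=x_dim)
mu, logvar = encode(x)
z = reparameterize(mu, logvar)
x_hat = decode(z)

# Negative evidence lower bound (ELBO) = reconstruction error
# + KL(q(z|x) || N(0, I)), using the closed form of the KL divergence
# between a diagonal Gaussian and the standard normal prior.
recon = np.sum((x - x_hat) ** 2)
kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
loss = recon + kl
```

In a real VAE the linear maps would be deep networks and `loss` would be minimized with stochastic gradient descent; the reparameterization step is what makes the sampling differentiable.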
Is primary topic of
Variational autoencoder
Label
Variational autoencoder
Link from a Wikipage to another Wikipage
Artificial neural network
Autoencoder
Backpropagation
Category:Bayesian statistics
Category:Dimension reduction
Category:Graphical models
Category:Neural network architectures
Category:Supervised learning
Category:Unsupervised learning
Chain rule (probability)
Cholesky decomposition
Cross entropy
Data augmentation
Deep learning
Evidence lower bound
File:Reparameterization Trick.png
File:Reparameterized Variational Autoencoder.png
File:VAE Basic.png
Gaussian distribution
Generative adversarial network
Gradient descent
Graphical model
Joint distribution
Kullback–Leibler divergence
Machine learning
Marginal distribution
Max Welling
Mean squared error
Random number generation
Representation learning
Semi-supervised learning
Sparse dictionary learning
Stochastic gradient descent
Supervised learning
Unsupervised learning
Variational Bayesian methods
SameAs
Auto-encodeur variationnel
DBZzs
Q97311562
Variational autoencoder
Варіаційний автокодувальник
変分オートエンコーダー
Subject
Category:Bayesian statistics
Category:Dimension reduction
Category:Graphical models
Category:Neural network architectures
Category:Supervised learning
Category:Unsupervised learning
Thumbnail
VAE Basic.png?width=300
WasDerivedFrom
Variational autoencoder?oldid=1123281523&ns=0
WikiPageLength
20819
Wikipage page ID
62078649
Wikipage revision ID
1123281523
WikiPageUsesTemplate
Template:Differentiable computing
Template:Div col
Template:Div col end
Template:Machine learning bar
Template:Main
Template:Reflist
Template:Short description
Template:Use dmy dates