In this paper, we propose an arbitrarily-conditioned data imputation framework built upon variational autoencoders and normalizing flows. The proposed model maps any partially observed input to a multi-modal latent variational distribution; sampling from this distribution yields stochastic imputations. Preliminary evaluation on the MNIST dataset shows promising stochastic imputations conditioned on partial images.
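To make the pipeline concrete, below is a minimal PyTorch sketch of the idea described above, not the authors' implementation: an encoder conditions on the observed pixels together with their mask, planar normalizing-flow steps enrich the Gaussian posterior so it can become multi-modal, and decoding repeated latent samples produces stochastic imputations. All module names, layer sizes, and the choice of planar flows are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PlanarFlow(nn.Module):
    """One planar normalizing-flow step: z' = z + u * tanh(w^T z + b)."""
    def __init__(self, dim):
        super().__init__()
        self.u = nn.Parameter(torch.randn(dim) * 0.01)
        self.w = nn.Parameter(torch.randn(dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z):
        # (B, dim) -> (B, dim); the tanh unit bends the density,
        # letting a stack of flows model multi-modal posteriors.
        return z + self.u * torch.tanh(z @ self.w + self.b).unsqueeze(-1)

class ConditionalFlowVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=32, h_dim=256, n_flows=4):
        super().__init__()
        # The encoder sees the masked input concatenated with the mask,
        # so it can condition on any subset of observed pixels.
        self.enc = nn.Sequential(nn.Linear(2 * x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.flows = nn.ModuleList([PlanarFlow(z_dim) for _ in range(n_flows)])
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def forward(self, x, mask):
        h = self.enc(torch.cat([x * mask, mask], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        for flow in self.flows:  # enrich the base Gaussian posterior
            z = flow(z)
        return self.dec(z)

# Stochastic imputation: each forward pass draws a fresh latent sample,
# so repeated calls yield different plausible completions of the image.
model = ConditionalFlowVAE()
x = torch.rand(1, 784)                      # a flattened 28x28 image
mask = (torch.rand(1, 784) > 0.5).float()   # 1 = observed pixel
imputations = [model(x, mask) for _ in range(5)]
```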
BibTeX
@InProceedings{Carvalho2019,
  title ={{Arbitrarily-conditioned Data Imputation}},
  author ={Carvalho, Micael and Durand, Thibaut and He, Jiawei and Mehrasa, Nazanin and Mori, Greg},
  booktitle ={Proceedings of The 2nd Symposium on Advances in Approximate Bayesian Inference},
  year ={2019},
  url = {https://openreview.net/forum?id=r1eP5khVKB},
}