In this paper, we propose an arbitrarily-conditioned data imputation framework built upon variational autoencoders and normalizing flows. The proposed model maps arbitrary partial observations to a multi-modal latent variational distribution, and sampling from this distribution yields stochastic imputations. A preliminary evaluation on the MNIST dataset shows promising stochastic imputation conditioned on partial input images.
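The abstract only sketches the recipe, so the code below is a minimal, hypothetical PyTorch illustration of the general idea: an encoder ingests the observed pixels together with an observation mask, a few normalizing-flow steps enrich the Gaussian variational posterior into a more flexible (potentially multi-modal) one, and repeated sampling through the decoder yields stochastic imputations. Planar flows, the class names (ConditionalImputer, PlanarFlow), and all dimensions are illustrative assumptions, not the authors' actual architecture; the training objective (an ELBO with flow log-determinant terms) is omitted.

import torch
import torch.nn as nn

class PlanarFlow(nn.Module):
    """One planar normalizing-flow step: z' = z + u * tanh(w^T z + b).
    (Chosen here only as a simple example of a flow; the paper may use
    a different flow family.)"""
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Parameter(torch.randn(dim) * 0.1)
        self.u = nn.Parameter(torch.randn(dim) * 0.1)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z):
        pre = z @ self.w + self.b                        # (batch,)
        return z + torch.tanh(pre).unsqueeze(-1) * self.u

class ConditionalImputer(nn.Module):
    """Encode a partially observed image (plus its mask) into a
    flow-transformed latent posterior; decode samples into full images."""
    def __init__(self, x_dim=784, z_dim=32, h_dim=256, n_flows=4):
        super().__init__()
        # The encoder sees the observed pixels and the observation mask.
        self.enc = nn.Sequential(
            nn.Linear(2 * x_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, 2 * z_dim),   # mean and log-variance
        )
        self.flows = nn.ModuleList(PlanarFlow(z_dim) for _ in range(n_flows))
        self.dec = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(),
            nn.Linear(h_dim, x_dim), nn.Sigmoid(),
        )

    def forward(self, x, mask):
        h = self.enc(torch.cat([x * mask, mask], dim=-1))
        mu, logvar = h.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        for flow in self.flows:        # enrich the Gaussian posterior
            z = flow(z)
        return self.dec(z)

# Stochastic imputation: repeated forward passes on the same partial
# input draw different latents, hence different plausible completions.
model = ConditionalImputer()
x = torch.rand(8, 784)                       # batch of flattened MNIST images
mask = (torch.rand(8, 784) > 0.5).float()    # 1 = observed, 0 = missing
imputations = [model(x, mask) for _ in range(5)]

Because each forward pass draws a fresh latent sample, calling the model several times on the same partial image produces a set of distinct, plausible completions, which is the stochastic-imputation behavior the abstract describes.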
BibTeX
@InProceedings{Carvalho2019,
  title     = {{Arbitrarily-conditioned Data Imputation}},
  author    = {Carvalho, Micael and Durand, Thibaut and He, Jiawei and Mehrasa, Nazanin and Mori, Greg},
  booktitle = {Proceedings of The 2nd Symposium on Advances in Approximate Bayesian Inference},
  year      = {2019},
  url       = {https://openreview.net/forum?id=r1eP5khVKB},
}
Related Research

- Why Exposure Bias Matters: An Imitation Learning Perspective of Error Accumulation in Language Generation. K. Arora, L. El Asri, H. Bahuleyan, and J. C. K. Cheung. Association for Computational Linguistics (ACL).
- Our NeurIPS 2021 Reading List. Y. Cao, K. Y. C. Lui, T. Durand, J. He, P. Xu, N. Mehrasa, A. Radovic, A. Lehrmann, R. Deng, A. Abdi, M. Schlegel, and S. Liu. Topics: Computer Vision; Data Visualization; Graph Representation Learning; Learning and Generalization; Natural Language Processing; Optimization; Reinforcement Learning; Time Series Modelling; Unsupervised Learning.
- Variational Selective Autoencoder: Learning from Partially-Observed Heterogeneous Data. Y. Gong, H. Hajimirsadeghi, J. He, T. Durand, and G. Mori. International Conference on Artificial Intelligence and Statistics (AISTATS).