@InProceedings{pmlr-v118-gong20a,
  title = {Variational Selective Autoencoder},
  author = {Gong, Yu and Hajimirsadeghi, Hossein and He, Jiawei and Nawhal, Megha and Durand, Thibaut and Mori, Greg},
  booktitle = {Proceedings of The 2nd Symposium on Advances in Approximate Bayesian Inference},
  year = {2020},
  editor = {Zhang, Cheng and Ruiz, Francisco and Bui, Thang and Dieng, Adji Bousso and Liang, Dawen},
  volume = {118},
  series = {Proceedings of Machine Learning Research},
  publisher = {PMLR},
  abstract = {Despite promising progress on unimodal data imputation (e.g., image inpainting), models for multimodal data imputation remain far from satisfactory. In this work, we propose the variational selective autoencoder (VSAE) for this task. Learning only from partially observed data, VSAE can model the joint distribution of observed/unobserved modalities and the imputation mask, resulting in a unified model for various downstream tasks including data generation and imputation. Evaluation on synthetic high-dimensional and challenging low-dimensional multimodal datasets shows improvement over state-of-the-art imputation models.}
}