Recently there has been increasing interest in scene generation within the research community. However, scene layouts are largely modeled in a deterministic fashion, ignoring plausible visual variations given the same textual description as input. We propose LayoutVAE, a variational autoencoder based framework for generating stochastic scene layouts. LayoutVAE is a versatile modeling framework that can generate a full image layout given a label set, or a per-label layout for an existing image given a new label. It is also capable of detecting unusual layouts, potentially offering a way to evaluate layout generation methods. Extensive experiments on MNIST-Layouts and the challenging COCO 2017 Panoptic dataset verify the effectiveness of our proposed framework.
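The core idea of stochastic layout generation is that sampling different latent codes from a VAE, conditioned on the same label, yields different plausible bounding boxes. The toy NumPy sketch below illustrates this with the standard VAE reparameterization trick and a hypothetical one-layer linear decoder; it is not the paper's architecture, and all names (`decode_box`, the label embedding, the weights) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps: the standard VAE reparameterization trick
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode_box(z, label_emb, W, b):
    # hypothetical linear decoder: conditions on a label embedding and
    # outputs a bounding box (x, y, w, h) squashed into [0, 1] by a sigmoid
    h = np.concatenate([z, label_emb])
    return 1.0 / (1.0 + np.exp(-(W @ h + b)))

latent_dim, label_dim = 8, 4
mu, log_var = np.zeros(latent_dim), np.zeros(latent_dim)  # prior N(0, I)
label_emb = rng.standard_normal(label_dim)                # toy label embedding
W = rng.standard_normal((4, latent_dim + label_dim)) * 0.1
b = np.zeros(4)

# Different latent samples yield different plausible boxes for the same label.
boxes = [decode_box(reparameterize(mu, log_var), label_emb, W, b)
         for _ in range(3)]
```

Each entry of `boxes` is a distinct (x, y, w, h) tuple in [0, 1], mimicking the one-to-many mapping from a label to layouts that motivates the stochastic formulation.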

BibTeX

@inproceedings{JyothiDHSM19,
  title = {LayoutVAE: Stochastic Scene Layout Generation from a Label Set},
  author = {Akash Abdu Jyothi and Thibaut Durand and Jiawei He and Leonid Sigal and Greg Mori},
  booktitle = {International Conference on Computer Vision (ICCV)},
  year = 2019,
}