Unsupervised multi-object scene decomposition is a fast-emerging problem in representation learning. Despite significant progress in static scenes, such models are unable to leverage important dynamic cues present in videos. We propose PROVIDE, a novel unsupervised framework for PRObabilistic VIdeo DEcomposition based on a temporal extension of iterative inference. PROVIDE is powerful enough to jointly model complex individual multi-object representations and explicit temporal dependencies between latent variables across frames. This is achieved by leveraging a 2D-LSTM with temporally conditioned inference and generation within an iterative amortized inference scheme for posterior refinement. Our method improves the overall quality of decompositions, encodes information about the objects’ dynamics, and can be used to predict the trajectory of each object separately. Additionally, we show that our model remains highly accurate even without color information. We demonstrate the decomposition capabilities of our model and show that it outperforms the state of the art on several benchmark datasets, one of which was curated for this work and will be made publicly available.

BibTeX


@InProceedings{pmlr-v161-zablotskaia21a,
  title =  {PROVIDE: a probabilistic framework for unsupervised video decomposition},
  author =       {Zablotskaia, Polina and Dominici, Edoardo A. and Sigal, Leonid and Lehrmann, Andreas M.},
  booktitle =  {Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence},
  pages =  {2019--2028},
  year =  {2021},
  editor =  {de Campos, Cassio and Maathuis, Marloes H.},
  volume =  {161},
  series =  {Proceedings of Machine Learning Research},
  month =  {27--30 Jul},
  publisher =    {PMLR},
  pdf =  {https://proceedings.mlr.press/v161/zablotskaia21a/zablotskaia21a.pdf},
  url =  {https://proceedings.mlr.press/v161/zablotskaia21a.html}
}