
Publications


  1. C. Srinivasa, I. Givoni, S. Ravanbakhsh, and B. J. Frey
    Min-Max Propagation.
    NIPS, 2017.

    We study the application of min-max propagation, a variation of belief propagation, for approximate min-max inference in factor graphs. We show that for “any” high-order function that can be minimized in O(ω), the min-max message update can be obtained using an efficient O(K(ω + log(K))) procedure, where K is the number of variables. We demonstrate how this generic procedure, in combination with efficient updates for a family of high-order constraints, enables the application of min-max propagation to efficiently approximate the NP-hard problem of makespan minimization, which seeks to distribute a set of tasks on machines, such that the worst-case load is minimized.

    @InProceedings{SrinivasaMMP,
    Title = {Min-Max Propagation},
    Author = {Christopher Srinivasa and Inmar Givoni and Siamak Ravanbakhsh and Brendan J. Frey},
    Year = {2017},
    Abstract = {We study the application of min-max propagation, a variation of belief propagation, for approximate min-max inference in factor graphs. We show that for “any” high-order function that can be minimized in O(ω), the min-max message update can be obtained using an efficient O(K(ω + log(K))) procedure, where K is the number of variables. We demonstrate how this generic procedure, in combination with efficient updates for a family of high-order constraints, enables the application of min-max propagation to efficiently approximate the NP-hard problem of makespan minimization, which seeks to distribute a set of tasks on machines, such that the worst-case load is minimized.},
    Booktitle = {NIPS},
    Url = {https://papers.nips.cc/paper/7140-min-max-propagation}
    }
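
    As a concrete point of reference, the simplest (pairwise-factor) case of the message update amounts to replacing sum-product's (sum, product) pair with (min, max). The sketch below is ours, not code from the paper, and the function and variable names are illustrative only:

    import numpy as np

    def minmax_message(factor, incoming):
        # Min-max message from a pairwise factor f(x, y) to variable x:
        #   m_{f -> x}(x) = min_y max(f(x, y), m_{y -> f}(y)),
        # the min-max analogue of the max-product update.
        # factor:   (|X|, |Y|) array of factor values f(x, y)
        # incoming: (|Y|,) array holding the message m_{y -> f}(y)
        return np.min(np.maximum(factor, incoming[None, :]), axis=1)

    # Toy example with two ternary variables.
    f = np.array([[3., 1., 4.],
                  [1., 5., 9.],
                  [2., 6., 5.]])
    m_in = np.array([2., 0., 7.])
    print(minmax_message(f, m_in))  # -> [1. 2. 2.]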

  2. Y. Cao and L. Wang
    Automatic Selection of t-SNE Perplexity.
    ICML Workshop on AutoML, 2017.

    t-Distributed Stochastic Neighbor Embedding (t-SNE) is one of the most widely used dimensionality reduction methods for data visualization, but it has a perplexity hyperparameter that requires manual selection. In practice, proper tuning of t-SNE perplexity requires users to understand the inner working of the method as well as to have hands-on experience. We propose a model selection objective for t-SNE perplexity that requires negligible extra computation beyond that of the t-SNE itself. We empirically validate that the perplexity settings found by our approach are consistent with preferences elicited from human experts across a number of datasets. The similarities of our approach to Bayesian information criteria (BIC) and minimum description length (MDL) are also analyzed.

    @Conference{CaoAST,
    Title = {Automatic Selection of t-SNE Perplexity},
    Author = {Yanshuai Cao and Luyu Wang},
    Year = {2017},
    Abstract = {t-Distributed Stochastic Neighbor Embedding (t-SNE) is one of the most widely used dimensionality reduction methods for data visualization, but it has a perplexity hyperparameter that requires manual selection. In practice, proper tuning of t-SNE perplexity requires users to understand the inner working of the method as well as to have hands-on experience. We propose a model selection objective for t-SNE perplexity that requires negligible extra computation beyond that of the t-SNE itself. We empirically validate that the perplexity settings found by our approach are consistent with preferences elicited from human experts across a number of datasets. The similarities of our approach to Bayesian information criteria (BIC) and minimum description length (MDL) are also analyzed.},
    Booktitle = {ICML Workshop on AutoML},
    Url = {http://arxiv.org/abs/1708.03229}
    }
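
    The selection pattern the abstract describes can be sketched as a grid search that scores each t-SNE run by its final KL divergence plus a BIC-like penalty. The penalty term below is our illustrative placeholder rather than the paper's exact objective, and the code assumes scikit-learn's TSNE, which exposes the fitted KL divergence as kl_divergence_:

    import numpy as np
    from sklearn.manifold import TSNE

    def select_perplexity(X, perplexities=(5, 10, 20, 30, 50), seed=0):
        # Score each perplexity by the final KL divergence plus a BIC-style
        # complexity penalty (illustrative stand-in, not the paper's formula).
        # Assumes len(X) is larger than the largest perplexity in the grid.
        n = X.shape[0]
        scores = {}
        for perp in perplexities:
            tsne = TSNE(perplexity=perp, random_state=seed).fit(X)
            scores[perp] = 2.0 * tsne.kl_divergence_ + np.log(n) * perp / n
        return min(scores, key=scores.get), scores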

  3. K. Y. C. Lui, Y. Cao, M. Gazeau, and K. S. Zhang
    Implicit Manifold Learning on Generative Adversarial Networks.
    ICML Workshop on Implicit Models, 2017.

    This paper raises an implicit manifold learning perspective in Generative Adversarial Networks (GANs), by studying how the support of the learned distribution, modelled as a submanifold M_θ, perfectly matches with M_r, the support of the real data distribution. We show that optimizing Jensen-Shannon divergence forces M_θ to perfectly match with M_r, while optimizing Wasserstein distance does not. On the other hand, by comparing the gradients of the Jensen-Shannon divergence and the Wasserstein distances (W_1 and W_2^2) in their primal forms, we conjecture that Wasserstein W_2^2 may enjoy desirable properties such as reduced mode collapse. It is therefore interesting to design new distances that inherit the best from both distances.

    @Conference{LuiIML,
    Title = {Implicit Manifold Learning on Generative Adversarial Networks},
    Author = {Kry Yik Chau Lui and Yanshuai Cao and Maxime Gazeau and Kelvin Shuangjian Zhang},
    Year = {2017},
    Abstract = {This paper raises an implicit manifold learning perspective in Generative Adversarial Networks (GANs), by studying how the support of the learned distribution, modelled as a submanifold M_θ, perfectly matches with M_r, the support of the real data distribution. We show that optimizing Jensen-Shannon divergence forces M_θ to perfectly match with M_r, while optimizing Wasserstein distance does not. On the other hand, by comparing the gradients of the Jensen-Shannon divergence and the Wasserstein distances (W_1 and W_2^2) in their primal forms, we conjecture that Wasserstein W_2^2 may enjoy desirable properties such as reduced mode collapse. It is therefore interesting to design new distances that inherit the best from both distances.},
    Booktitle = {ICML Workshop on Implicit Models},
    Url = {https://arxiv.org/abs/1710.11260}
    }
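
    The qualitative gap between Jensen-Shannon divergence and Wasserstein distance is easy to see numerically even in one dimension: when two distributions have (nearly) disjoint supports, JS saturates while W_2^2 keeps growing with the separation. The snippet below is our generic illustration of that contrast, not the paper's analysis; it uses SciPy's jensenshannon (which returns the JS distance, i.e. the square root of the divergence) and the sorted-sample formula for W_2^2 between equal-size 1-D samples:

    import numpy as np
    from scipy.spatial.distance import jensenshannon

    def js_divergence(p, q):
        # Jensen-Shannon divergence in bits between two discrete histograms.
        return jensenshannon(p, q, base=2) ** 2

    def w2_squared_1d(x, y):
        # Squared 2-Wasserstein distance between equal-size 1-D samples,
        # computed via the quantile (sorted-sample) coupling.
        return np.mean((np.sort(x) - np.sort(y)) ** 2)

    rng = np.random.default_rng(0)
    bins = np.linspace(-10.0, 10.0, 201)
    for gap in (1.0, 5.0):
        x = rng.normal(0.0, 0.01, 1000)
        y = rng.normal(gap, 0.01, 1000)
        p, _ = np.histogram(x, bins=bins, density=True)
        q, _ = np.histogram(y, bins=bins, density=True)
        # JS stays at about 1 bit for both gaps; W_2^2 grows roughly like gap^2.
        print(gap, js_divergence(p, q), w2_squared_1d(x, y))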
