We study the application of min-max propagation, a variation of belief propagation, for approximate min-max inference in factor graphs. We show that for any high-order function that can be minimized in O(ω) time, the min-max message update can be obtained using an efficient O(K(ω + log K)) procedure, where K is the number of variables. We demonstrate how this generic procedure, in combination with efficient updates for a family of high-order constraints, enables the application of min-max propagation to efficiently approximate the NP-hard problem of makespan minimization, which seeks to distribute a set of tasks over machines such that the worst-case load is minimized.
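To give a flavor of the idea, here is a minimal sketch of min-max message passing on a tiny chain factor graph. In the min-max semiring, min plays the role that sum plays in sum-product belief propagation and max plays the role of product, so on a tree the passed messages recover the exact min-max value. The factor tables below are illustrative toy values, not taken from the paper.

```python
import itertools

# Toy chain factor graph: x1 - f12 - x2 - f23 - x3, binary variables.
# Objective: min over assignments of the max over factor values.
f12 = {(a, b): abs(2 * a - b) + 1 for a in (0, 1) for b in (0, 1)}
f23 = {(b, c): 3 * b + (1 - c) for b in (0, 1) for c in (0, 1)}

# Brute force: minimize the worst (max) factor value over all assignments.
brute = min(max(f12[a, b], f23[b, c])
            for a, b, c in itertools.product((0, 1), repeat=3))

# Min-max messages toward x2 (leaf-to-root pass on the chain).
# Factor-to-variable update: minimize over the other variable of
# max(factor value, incoming message); leaf variables send the
# semiring identity for max, i.e. -infinity.
m_f12_to_x2 = {b: min(max(f12[a, b], float('-inf')) for a in (0, 1))
               for b in (0, 1)}
m_f23_to_x2 = {b: min(max(f23[b, c], float('-inf')) for c in (0, 1))
               for b in (0, 1)}

# Belief at x2 is the max of incoming messages; minimizing it over x2
# yields the optimal min-max value.
mp = min(max(m_f12_to_x2[b], m_f23_to_x2[b]) for b in (0, 1))

print(brute, mp)  # the two values agree because the graph is a tree
```

On graphs with cycles the updates are applied iteratively and yield an approximation, which is the setting the paper studies.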
Bibtex
@InProceedings{SrinivasaMMP,
Title = {Min-Max Propagation},
Author = {Christopher Srinivasa and Inmar Givoni and Siamak Ravanbakhsh and Brendan J. Frey},
Year = {2017},
Abstract = {We study the application of min-max propagation, a variation of belief propagation, for approximate min-max inference in factor graphs. We show that for any high-order function that can be minimized in O(ω) time, the min-max message update can be obtained using an efficient O(K(ω + log K)) procedure, where K is the number of variables. We demonstrate how this generic procedure, in combination with efficient updates for a family of high-order constraints, enables the application of min-max propagation to efficiently approximate the NP-hard problem of makespan minimization, which seeks to distribute a set of tasks over machines such that the worst-case load is minimized.},
Booktitle = {Advances in Neural Information Processing Systems (NIPS)},
Url = {https://papers.nips.cc/paper/7140-min-max-propagation}
}
Related Research
- Self-supervised Learning in Time-Series Forecasting — A Contrastive Learning Approach
T. Sylvain, L. Meng, and A. Lehrmann.
Research
- Efficient CDF Approximations for Normalizing Flows
C.S. Sastry, A. Lehrmann, M. Brubaker, and A. Radovic. Transactions on Machine Learning Research (TMLR)
Publications
- PUMA: Performance Unchanged Model Augmentation for Training Data Removal
G. Wu, M. Hashemi, and C. Srinivasa. Association for the Advancement of Artificial Intelligence (AAAI)
Publications