ML models often operate within the context of a larger system that can adapt its response when the ML model is uncertain, such as falling back on safe defaults or deferring to a human in the loop. This commonly encountered operational context calls for principled techniques for training ML models with the option to abstain from predicting when uncertain. Selective neural networks are trained with an integrated option to abstain, allowing them to learn to recognize and optimize for the subset of the data distribution for which confident predictions can be made. However, optimizing selective networks is challenging due to the non-differentiability of the binary selection function (the discrete decision of whether to predict or abstain). This paper presents a general method for training selective networks that leverages the Gumbel-softmax reparameterization trick to enable selection within an end-to-end differentiable training framework. Experiments on public datasets demonstrate the potential of Gumbel-softmax selective networks for selective regression and classification.
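As a concrete illustration of the core idea, the sketch below uses PyTorch's `F.gumbel_softmax` to draw a differentiable sample from a binary predict/abstain distribution, so gradients can flow through the discrete selection decision. This is a minimal sketch under stated assumptions, not the authors' exact implementation: the SelectiveNet-style coverage objective in `selective_loss`, and the `target_coverage` and `lam` values, are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GumbelSelectiveNet(nn.Module):
    """Sketch of a selective network with a Gumbel-softmax selection head."""
    def __init__(self, in_dim, hidden_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.predictor = nn.Linear(hidden_dim, 1)      # regression head
        self.select_logits = nn.Linear(hidden_dim, 2)  # [abstain, predict] logits

    def forward(self, x, tau=1.0, hard=True):
        h = self.backbone(x)
        y_hat = self.predictor(h).squeeze(-1)
        # Differentiable sample from the binary selection distribution.
        # hard=True yields a discrete 0/1 decision in the forward pass while
        # gradients flow through the soft sample (straight-through estimator).
        s = F.gumbel_softmax(self.select_logits(h), tau=tau, hard=hard)[..., 1]
        return y_hat, s

def selective_loss(y_hat, s, y, target_coverage=0.8, lam=32.0):
    # Empirical selective risk: prediction loss on selected examples,
    # normalized by coverage, plus a quadratic penalty that pushes
    # coverage toward a target (SelectiveNet-style objective; assumed here).
    coverage = s.mean()
    risk = (s * F.mse_loss(y_hat, y, reduction="none")).mean() / coverage.clamp(min=1e-6)
    return risk + lam * F.relu(target_coverage - coverage) ** 2
```

Because the Gumbel-softmax sample is a reparameterized draw rather than a deterministic soft gate, the network is trained on stochastic predict/abstain decisions that match the discrete behavior used at test time, while the temperature `tau` controls how closely the relaxation approximates hard selection.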
BibTeX
@misc{salem2022gumbel,
  title     = {Gumbel-Softmax Selective Networks},
  author    = {Salem, Mahmoud and Ahmed, Mohamed Osama and Tung, Frederick and Oliveira, Gabriel},
  year      = {2022},
  publisher = {arXiv},
  doi       = {10.48550/ARXIV.2211.10564},
  url       = {https://arxiv.org/abs/2211.10564},
  keywords  = {Machine Learning (cs.LG), Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences},
  copyright = {Creative Commons Attribution 4.0 International}
}
Related Research
- Towards Better Selective Classification. L. Feng, M. O. Ahmed, H. Hajimirsadeghi, and A. Abdi. International Conference on Learning Representations (ICLR).
- Training a Vision Transformer from scratch in less than 24 hours with 1 GPU. S. Irandoust, T. Durand, Y. Rakhmangulova, W. Zi, and H. Hajimirsadeghi. Workshop at Conference on Neural Information Processing Systems (NeurIPS).
- RankSim: Ranking Similarity Regularization for Deep Imbalanced Regression. Y. Gong, G. Mori, and F. Tung. International Conference on Machine Learning (ICML).