In this work, we propose a practical scheme to enforce monotonicity in neural networks with respect to a given subset of the dimensions of the input space. The proposed approach focuses on the setting where point-wise gradient penalties are used as a soft constraint alongside the empirical risk during training. Our results indicate that the choice of the points at which such a penalty is computed determines the regions of the input space where the desired property is satisfied. As such, previous methods result in models that are monotonic either only at the boundaries of the input space or only in the small volume where the training data lies. Given this, we propose an alternative approach that uses pairs of training instances and random points to create mixtures of points lying both inside and outside of the convex hull of the training sample. Empirical evaluation carried out on different datasets shows that the proposed approach yields predictors that are monotonic in a larger volume of the space compared to previous methods. Our approach introduces no significant computational overhead, leading to an efficient procedure that consistently achieves the best performance amongst all alternatives.
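
To illustrate the general setting, below is a minimal PyTorch sketch (not the authors' reference implementation) of a point-wise monotonicity gradient penalty evaluated at mixture points. The names monotonicity_penalty, mixture_points, monotone_dims, lambda_pen, and the bounds low/high are illustrative assumptions, and the exact mixing scheme used in the paper may differ.

import torch

def monotonicity_penalty(model, x_pen, monotone_dims):
    # Point-wise gradient penalty: punish negative partial derivatives of
    # the model output with respect to the features that should be monotone.
    x_pen = x_pen.clone().requires_grad_(True)
    y = model(x_pen)
    grads = torch.autograd.grad(y.sum(), x_pen, create_graph=True)[0]
    neg = torch.relu(-grads[:, monotone_dims])  # only monotone dims are constrained
    return (neg ** 2).mean()

def mixture_points(x_batch, low, high):
    # Penalty points built from pairs of training instances and random
    # points, so they fall both inside and outside the convex hull of the
    # training sample (the mixing scheme here is an assumption).
    pairs = x_batch[torch.randperm(x_batch.size(0))]
    rand = torch.rand_like(x_batch) * (high - low) + low
    alpha = torch.rand(x_batch.size(0), 1)
    inside = alpha * x_batch + (1 - alpha) * pairs   # inside the convex hull
    beta = torch.rand(x_batch.size(0), 1)
    return beta * inside + (1 - beta) * rand         # mixed with random points

# Training step (sketch): empirical risk plus the soft monotonicity penalty.
# loss = task_loss(model(x), y) \
#        + lambda_pen * monotonicity_penalty(model, mixture_points(x, low, high),
#                                            monotone_dims=[0, 2])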

BibTeX

@inproceedings{
anonymous2021not,
title={Not Too Close and Not Too Far:  Enforcing Monotonicity Requires Penalizing The Right Points},
author={Anonymous},
booktitle={eXplainable AI approaches for debugging and diagnosis.},
year={2021},
url={https://openreview.net/forum?id=xdFqKVlDHnY}
}
