We study adversarial robustness of neural networks from a margin maximization perspective, where margins are defined as the distances from inputs to a classifier’s decision boundary.
Our study shows that maximizing margins can be achieved by minimizing the adversarial loss on the decision boundary at the “shortest successful perturbation”, demonstrating a close connection between adversarial losses and the margins. We propose Max-Margin Adversarial (MMA) training to directly maximize the margins to achieve adversarial robustness.
Instead of adversarial training with a fixed $\epsilon$, MMA offers an improvement by enabling adaptive selection of the “correct” $\epsilon$ as the margin individually for each datapoint. In addition, we rigorously analyze adversarial training from the perspective of margin maximization, and provide an alternative interpretation for adversarial training: it maximizes either a lower or an upper bound of the margins. Our experiments empirically confirm our theory and demonstrate MMA training’s efficacy on the MNIST and CIFAR10 datasets w.r.t. $\ell_\infty$ and $\ell_2$ robustness.
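For intuition, the margin and the “shortest successful perturbation” have closed forms in the linear case: for a binary classifier $f(x) = w \cdot x + b$ with labels $y \in \{-1, +1\}$, the $\ell_2$ margin of a correctly classified point is $y(w \cdot x + b)/\lVert w \rVert$, and the shortest perturbation moves the point orthogonally onto the hyperplane. The sketch below is a toy illustration of these two quantities (not the paper's training algorithm, which handles general networks):

```python
import numpy as np

def margin_l2(w, b, x, y):
    # Signed l2 distance from x to the hyperplane w.x + b = 0;
    # positive iff x is correctly classified (y in {-1, +1}).
    return y * (x @ w + b) / np.linalg.norm(w)

def shortest_successful_perturbation(w, b, x, y):
    # Smallest l2 perturbation moving x onto the decision boundary:
    # step against the (signed) margin along the unit normal w/||w||.
    d = margin_l2(w, b, x, y)
    return -y * d * w / np.linalg.norm(w)

# Toy example: w = (3, 4), b = -5, point x = (3, 4) with label +1.
w, b = np.array([3.0, 4.0]), -5.0
x, y = np.array([3.0, 4.0]), 1.0

m = margin_l2(w, b, x, y)                      # margin = 20 / 5 = 4
delta = shortest_successful_perturbation(w, b, x, y)
# x + delta lies exactly on the boundary: w.(x + delta) + b = 0,
# so ||delta|| equals the margin -- MMA's per-datapoint "correct" epsilon.
```

In MMA training, this per-example margin plays the role of the adaptive $\epsilon$: points far from the boundary receive a larger perturbation budget than points near it.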
BibTeX
@inproceedings{
ding2020mma,
title={{MMA} Training: Direct Input Space Margin Maximization through Adversarial Training},
author={Gavin Weiguang Ding and Yash Sharma and Kry Yik Chau Lui and Ruitong Huang},
booktitle={International Conference on Learning Representations},
year={2020},
url={https://openreview.net/forum?id=HkeryxBtPB}
}
Related Research
- Borealis AI at International Conference on Learning Representations (ICLR): Machine Learning for a better financial future. Learning And Generalization; Natural Language Processing; Time Series Modelling.
- Few-Shot Learning & Meta-Learning | Tutorial. W. Zi, L. S. Ghoraie, and S. Prince.
- Self-supervised Learning in Time-Series Forecasting — A Contrastive Learning Approach. T. Sylvain, L. Meng, and A. Lehrmann.