We propose a novel regularizer to improve the training of Generative Adversarial Networks (GANs). The motivation is that when the discriminator D spreads its model capacity in the right way, the learning signals it provides to the generator G are more informative and diverse. This helps G explore better and discover the real data manifold, while avoiding large, unstable jumps caused by erroneous extrapolation by D. Our regularizer guides the rectifier discriminator D to better allocate its model capacity by encouraging the binary activation patterns on selected internal layers of D to have high joint entropy. Experimental results on both synthetic and real datasets demonstrate improvements in the stability and convergence speed of GAN training, as well as higher sample quality. The approach also yields higher classification accuracy in semi-supervised learning.
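To illustrate the idea (this is a minimal sketch, not the authors' exact formulation), high joint entropy over binary activation patterns can be encouraged by two penalties: one pushing each unit's mean sign toward zero across the batch, and one decorrelating the sign patterns of different samples. The NumPy sketch below uses a soft sign (tanh) so the penalty stays differentiable; the function name and weighting are assumptions for illustration.

```python
import numpy as np

def bre_penalty_sketch(h, eps=1e-6):
    """Sketch of a binarized-representation-entropy style penalty.

    h: (batch, d) array of pre-activations from a chosen rectifier layer of D.
    Returns a scalar penalty; smaller means the binary activation patterns
    are closer to balanced and mutually decorrelated (higher joint entropy).
    """
    s = np.tanh(h)                       # soft binarization into (-1, 1)
    batch, d = s.shape
    # Marginal term: each unit should be "on" and "off" equally often.
    marginal = np.mean(np.mean(s, axis=0) ** 2)
    # Pairwise term: sign patterns of different samples should decorrelate.
    sim = (s @ s.T) / d                  # (batch, batch) similarity matrix
    off_diag = sim - np.diag(np.diag(sim))
    pairwise = np.sum(np.abs(off_diag)) / (batch * (batch - 1) + eps)
    return marginal + pairwise
```

A batch in which every sample produces the same activation pattern incurs a large penalty, while diverse, balanced patterns incur a small one; added to the discriminator loss with a small weight, this pressures D to spread its capacity over the data.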

BibTeX

@inproceedings{Cao2018Improving,
  title     = {Improving GAN Training via Binarized Representation Entropy (BRE) Regularization},
  author    = {Yanshuai Cao and Gavin Weiguang Ding and Kry Yik-Chau Lui and Ruitong Huang},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2018},
  url       = {https://openreview.net/forum?id=BkLhaGZRW},
  note      = {accepted as poster}
}