A Binarized Representation Entropy (BRE) regularizer to diversify learning signals in GANs
Our novel Binarized Representation Entropy (BRE) regularizer directly encourages pairs of points in a mini-batch to have decorrelated activation patterns (or as decorrelated as possible while the network still performs its main discriminator/critic task). As a bonus, BRE can control where model capacity is allocated in the input data space, unlike global capacity regularizers such as dropout or weight decay, which only limit the model's overall complexity.
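As a rough illustration of the idea, the sketch below computes a BRE-style penalty: softly binarize a layer's pre-activations and penalize the mean absolute pairwise correlation of the resulting activation patterns across a mini-batch. The function name, the soft-sign smoothing, and the exact averaging are illustrative assumptions, not the paper's precise formulation.

```python
import numpy as np

def bre_penalty(h, eps=1e-3):
    """Illustrative BRE-style penalty (assumed form, not the exact paper loss).

    h   : (batch, d) array of pre-activations from one discriminator layer.
    eps : smoothing constant so the soft sign is differentiable near zero.
    Returns the mean absolute pairwise correlation of the (soft-)binarized
    activation patterns: 0 means fully decorrelated, 1 means identical.
    """
    s = h / (np.abs(h) + eps)                 # soft sign, close to +/-1 away from 0
    n, d = s.shape
    c = (s @ s.T) / d                         # pairwise pattern correlations
    off_diag = np.abs(c[~np.eye(n, dtype=bool)])  # exclude self-pairs
    return off_diag.mean()
```

Adding such a term (scaled by a small coefficient) to the discriminator loss would push different mini-batch points toward distinct activation patterns; identical activations give a penalty near 1, while independent random patterns give a value near 0.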
D should communicate to G the different ways in which fake points are wrong [i.e., learning signals for G should be diverse.]
In the above illustration, the semi-transparent mosaic glass represents the gradient vector field ∇xD(x). Only the parts directly above the fake data points (white) pass transport information to G. When D models large input regions coarsely (left), G cannot move its mass accurately.