Learning by contrasting positive and negative samples is a general strategy adopted by many methods. Noise contrastive estimation (NCE) for word embeddings and translating embeddings for knowledge graphs are examples in NLP employing this approach. In this work, we view contrastive learning as an abstraction of all such methods and augment the negative sampler into a mixture distribution containing an adversarially learned sampler. The resulting adaptive sampler finds harder negative examples, which forces the main model to learn a better representation of the data. We evaluate our proposal on learning word embeddings, order embeddings and knowledge graph embeddings and observe both faster convergence and improved results on multiple metrics.
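The core idea — mixing a fixed negative sampler with an adversarially learned one that is rewarded for proposing hard negatives — can be sketched in a toy NCE-style setup. Everything below (the synthetic "neighbours co-occur" data, the categorical generator over the vocabulary, the REINFORCE update, and all hyperparameters) is an illustrative assumption, not the paper's implementation, which also uses tricks such as entropy regularization to keep the generator from collapsing:

```python
import numpy as np

rng = np.random.default_rng(0)
V, D = 50, 8                      # toy vocab size and embedding dim (illustrative)
E = rng.normal(0.0, 0.1, (V, D))  # main model: word embeddings
gen_logits = np.zeros(V)          # adversarial sampler: categorical logits over vocab
lam, lr, lr_g = 0.5, 0.1, 0.05    # mixture weight, model LR, generator LR

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z):
    z = z - z.max()
    p = np.exp(z)
    return p / p.sum()

def sample_negative():
    """Mixture distribution: uniform with prob lam, adversarial otherwise."""
    if rng.random() < lam:
        return int(rng.integers(V)), False
    return int(rng.choice(V, p=softmax(gen_logits))), True

for step in range(3000):
    w = int(rng.integers(V))
    c = (w + 1) % V               # synthetic positive pair: neighbours co-occur
    n, from_gen = sample_negative()
    if n == c:
        continue
    Ew, Ec, En = E[w].copy(), E[c].copy(), E[n].copy()
    # logistic NCE-style loss: -log sig(s_pos) - log sig(-s_neg)
    g_pos = 1.0 - sigmoid(Ew @ Ec)   # push positive score up
    g_neg = sigmoid(Ew @ En)         # push negative score down
    E[w] += lr * (g_pos * Ec - g_neg * En)
    E[c] += lr * g_pos * Ew
    E[n] -= lr * g_neg * Ew
    if from_gen:
        # REINFORCE: reward the sampler for proposing hard (high-scoring) negatives
        p = softmax(gen_logits)
        onehot = np.zeros(V)
        onehot[n] = 1.0
        gen_logits += lr_g * g_neg * (onehot - p)

pos = np.mean([E[i] @ E[(i + 1) % V] for i in range(V)])
rnd = np.mean([E[i] @ E[int(rng.integers(V))] for i in range(V)])
print(pos > rnd)
```

As the main model improves, uniformly drawn negatives become easy and contribute vanishing gradient; the generator's share of the mixture keeps supplying negatives the model still scores highly, which is what drives the faster convergence reported in the paper.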
BibTeX
@inproceedings{Bose2018ACE,
  title     = {Adversarial Contrastive Estimation},
  author    = {Avishek Joey Bose and Huan Ling and Yanshuai Cao},
  booktitle = {Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)},
  url       = {https://arxiv.org/abs/1805.03642},
  year      = {2018}
}
Related Research
- Borealis AI at International Conference on Learning Representations (ICLR): Machine Learning for a better financial future (Learning and Generalization; Natural Language Processing; Time Series Modelling)
- Few-Shot Learning & Meta-Learning | Tutorial (W. Zi, L. S. Ghoraie, and S. Prince)
- Self-supervised Learning in Time-Series Forecasting — A Contrastive Learning Approach (T. Sylvain, L. Meng, and A. Lehrmann)