Current language generation models suffer from issues such as repetition, incoherence, and hallucination. An often-repeated hypothesis is that this brittleness is caused by a mismatch between the training and generation procedures, also referred to as exposure bias. In this paper, we verify this hypothesis by analyzing exposure bias from an imitation learning perspective. We show that exposure bias leads to an accumulation of errors, analyze why perplexity fails to capture this accumulation, and empirically show that this accumulation results in poor generation quality. Source code to reproduce these experiments is available at this https URL

BibTeX

@misc{https://doi.org/10.48550/arxiv.2204.01171,
  doi       = {10.48550/arXiv.2204.01171},
  url       = {https://arxiv.org/abs/2204.01171},
  author    = {Arora, Kushal and Asri, Layla El and Bahuleyan, Hareesh and Cheung, Jackie Chi Kit},
  keywords  = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), Machine Learning (cs.LG), FOS: Computer and information sciences},
  title     = {Why Exposure Bias Matters: An Imitation Learning Perspective of Error Accumulation in Language Generation},
  publisher = {arXiv},
  year      = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}