Deep reinforcement learning (RL) has achieved outstanding results in recent years. This has led to a dramatic increase in the number of applications and methods. Recent works have explored learning beyond single-agent scenarios and have considered multiagent learning (MAL) scenarios. Initial results report successes in complex multiagent domains, although there are several challenges to be addressed. The primary goal of this article is to provide a clear overview of current multiagent deep reinforcement learning (MDRL) literature. Additionally, we complement the overview with a broader analysis: (i) we revisit previous key components, originally presented in MAL and RL, and highlight how they have been adapted to multiagent deep reinforcement learning settings; (ii) we provide general guidelines for new practitioners in the area, describing lessons learned from MDRL works, pointing to recent benchmarks, and outlining open avenues of research; and (iii) we take a more critical tone, raising practical challenges of MDRL (e.g., implementation and computational demands). We expect this article will help unify and motivate future research to take advantage of the abundant literature that exists (e.g., RL and MAL) in a joint effort to promote fruitful research in the multiagent community.

BibTeX

@article{2019JAAMAS,
  title   = {A Survey and Critique of Multiagent Deep Reinforcement Learning},
  author  = {Pablo Hernandez-Leal and Bilal Kartal and Matthew E. Taylor},
  journal = {Autonomous Agents and Multi-Agent Systems},
  year    = {2019},
  month   = {October},
  volume  = {33},
  number  = {6},
  pages   = {750--797},
  doi     = {10.1007/s10458-019-09421-1}
}
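
For convenience, a minimal LaTeX sketch showing how the entry above can be cited; the bibliography file name refs.bib is a hypothetical choice, not part of the original entry:

\documentclass{article}
\begin{document}
% Cite the survey using the BibTeX key defined above
Multiagent deep reinforcement learning is surveyed in~\cite{2019JAAMAS}.

\bibliographystyle{plain}
\bibliography{refs} % refs.bib would contain the @article entry above
\end{document}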
