Trading markets are a real-world financial application in which to deploy reinforcement learning agents; however, they pose hard fundamental challenges such as high variance and costly exploration. Moreover, markets are inherently a multi-agent domain composed of many actors whose actions change the environment. To tackle these types of scenarios, agents need to exhibit certain characteristics, such as risk-awareness, robustness to perturbations, and low learning variance. We take these as building blocks and propose a family of four algorithms. First, we contribute two algorithms that use risk-averse objective functions and variance reduction techniques. Then, we extend the framework to multi-agent learning and assume an adversary that can take over and perturb the learning process. Our third and fourth algorithms perform well under this setting and balance theoretical guarantees with practical use. Additionally, we consider the multi-agent nature of the environment; our work is the first to extend empirical game-theoretic analysis to multi-agent learning with risk-sensitive payoffs.
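To illustrate the risk-aware ingredient mentioned above, one common risk-averse formulation is a mean-variance objective that penalizes the variability of returns, not just their average. The sketch below is a minimal, hypothetical illustration of that idea; the function name, the `risk_weight` parameter, and the exact formulation are assumptions for exposition, not the paper's actual objectives.

```python
def risk_sensitive_objective(returns, risk_weight=0.5):
    """Mean-variance objective: expected return minus a variance penalty.

    A higher `risk_weight` makes the agent more risk-averse.
    Hypothetical sketch; not the paper's exact objective.
    """
    n = len(returns)
    mean = sum(returns) / n
    variance = sum((r - mean) ** 2 for r in returns) / n
    return mean - risk_weight * variance


# Two strategies with the same mean return but different variance:
steady = [1.0, 1.0, 1.0, 1.0]
volatile = [-1.0, 3.0, -1.0, 3.0]
# The risk-sensitive objective prefers the steady strategy even though
# both have the same average return.
```

Under this objective, a low-variance strategy scores higher than a high-variance one with the same mean, which is the behavior a risk-averse trading agent is designed to exhibit.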

Authors
* Denotes equal contribution
BibTeX

@inproceedings{gao2021robust,
    title={{Robust Risk-Sensitive Reinforcement Learning Agents for Trading Markets}},
    author={Yue Gao and Kry Yik Chau Lui and Pablo Hernandez-Leal},
    year={2021},
    booktitle={Reinforcement Learning for Real Life (RL4RealLife) Workshop at ICML},
}