
How it works

As a reinforcement learning (RL) algorithm, Aiden develops an order execution strategy dynamically, adapting to constantly shifting market conditions and balancing long-term goals against short-term opportunities. As modern financial markets grow increasingly sophisticated, this adaptability allows Aiden to adjust as the landscape evolves.

The potential of RL in the real world was demonstrated in 2016, when Google's AlphaGo program beat the human world champion in the ancient game of Go.

In RL approaches, an agent is trained to take actions that maximize its expected reward (often over long horizons), rather than being trained to match recorded answers. For example, in the game of chess, a reward might be the immediate capture of an enemy piece, or the final game outcome of victory or defeat via checkmate. In the case of Aiden, this methodology allows us to optimize for the actions the agent believes will result in the best possible order execution.

RBC Capital Markets’ Aiden VWAP algorithm is the first foundational step in Aiden’s evolution. Current research at Borealis AI aims to keep Aiden on the forefront of ML advances and we believe there are many possibilities for how we can expand Aiden’s application to other trading strategies and asset classes.

You can find more information about the research behind Aiden here.

To learn more about how we use Aiden in RBC Capital Markets, please visit rbccm.com/aiden.


Related Publications

Multi Type Mean Field Reinforcement Learning
International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2020
Authors: S. Subramanian, P. Poupart, M. E. Taylor, N. Hegde

Uncertainty-Aware Action Advising for Deep Reinforcement Learning Agents
Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020
Authors: F. L. Da Silva, P. Hernandez-Leal, B. Kartal, M. E. Taylor