Fundamental Research

Reinforcement Learning

Reinforcement learning is a powerful framework for solving sequential decision tasks, including Atari games, robot control, and data center optimization. At Borealis AI, we're working on fundamental improvements to the framework by incorporating human knowledge. Although our goal is often super-human performance, leveraging a user's abilities can help systems learn faster and perform better across a wide range of tasks.
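As a concrete illustration of the framework, here is a minimal sketch of tabular Q-learning on a toy chain environment. The environment, hyperparameters, and names are illustrative assumptions, not Borealis AI code.

```python
import numpy as np

# Minimal tabular Q-learning on a toy 5-state chain (illustrative only).
# Moving right from the last state yields reward 1; everything else 0.
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1  # learning rate, discount, exploration

rng = np.random.default_rng(0)
for episode in range(500):
    s = 0
    for t in range(20):
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if (s == n_states - 1 and a == 1) else 0.0
        # Q-learning update: move Q(s, a) toward r + gamma * max_a' Q(s', a')
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.round(2))  # the greedy policy should prefer action 1 (right) everywhere
```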

An additional, related goal is to learn directly from human feedback. Video games have a score to maximize directly, but how should your banking app work to help you? What should the program try to optimize? Our goal is to learn from both explicit and implicit user feedback on a system's decisions. Once successful, programs will be able to better understand what a given user finds useful and autonomously maximize user satisfaction.
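The post doesn't commit to a particular method, but one common way to learn a reward from human feedback is a Bradley-Terry preference model: fit a reward function so that preferred items score higher. The sketch below is a hypothetical example with a linear reward and synthetic preferences standing in for user feedback.

```python
import numpy as np

# Hypothetical sketch: learn a linear reward r(x) = w . x from pairwise
# preferences with a Bradley-Terry model, P(a preferred over b)
# = sigmoid(r(a) - r(b)). Data here are synthetic stand-ins for feedback.
rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])                  # unknown "user taste"
X_a, X_b = rng.normal(size=(200, 3)), rng.normal(size=(200, 3))
prefs = (X_a @ w_true > X_b @ w_true).astype(float)  # 1 if a preferred over b

w = np.zeros(3)
for step in range(2000):                  # gradient ascent on the log-likelihood
    p = 1.0 / (1.0 + np.exp(-(X_a - X_b) @ w))
    grad = (X_a - X_b).T @ (prefs - p) / len(prefs)
    w += 0.5 * grad

print(np.corrcoef(X_a @ w, X_a @ w_true)[0, 1])  # learned reward tracks the true one
```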

Combinatorial Optimization

With applications ranging from the travelling salesman problem to clustering and scheduling, the field of combinatorial optimization involves discrete-variable problems that are pervasive in our everyday lives yet computationally intractable in the worst case (many are NP-hard). At Borealis AI, we're interested in developing new and more efficient approaches that obtain better polynomial-time approximate solutions to these problems.

Specifically, we're developing methods rooted in two areas of machine learning: graphical models and reinforcement learning.

In the first area, casting a combinatorial problem as inference on a graphical model lets us break it down into a set of smaller, partially overlapping local problems for a computational speed-up. We then alternate between solving these local subproblems and exchanging their solutions where the subsets overlap, converging more quickly to an approximate answer to the original problem.
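On tree-structured (sub)problems this exchange of local solutions can be made exact. As a toy illustration, the following min-sum message-passing sketch recovers the minimum-cost assignment on a chain by solving pairwise local problems and passing their partial solutions along; all costs are random stand-ins.

```python
import numpy as np

# Illustrative min-sum message passing on a chain-structured problem:
# each local subproblem couples two neighbouring variables, and messages
# exchange their partial solutions until the global minimum of
# sum_i unary[i][x_i] + sum_i pair[x_i, x_{i+1}] is recovered.
rng = np.random.default_rng(2)
n, k = 6, 4                                # 6 variables, 4 labels each
unary = rng.random((n, k))                 # local costs
pair = rng.random((k, k))                  # shared pairwise cost table

# Backward pass: msg[i][x] = cost of the best completion of x_{i+1..n-1}
msg = np.zeros((n, k))
for i in range(n - 2, -1, -1):
    msg[i] = (pair + unary[i + 1] + msg[i + 1]).min(axis=1)

# Forward decode: pick each label given the choice to its left
x = np.zeros(n, dtype=int)
x[0] = int((unary[0] + msg[0]).argmin())
for i in range(1, n):
    x[i] = int((pair[x[i - 1]] + unary[i] + msg[i]).argmin())

total = unary[np.arange(n), x].sum() + sum(pair[x[i], x[i + 1]] for i in range(n - 1))
print(x, total)  # exact minimum-cost assignment on a tree-structured graph
```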

From an RL standpoint, the goal in designing reward functions for combinatorial problems is twofold: i) avoiding handcrafted branching heuristics, which are often instance-specific and do not transfer well from one combinatorial problem to another; and ii) finding solutions to particularly hard instances of these problems, where the solution space is sparse and disconnected.
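A toy sketch of the RL view, under illustrative assumptions: a construction policy for a tiny travelling salesman instance trained with REINFORCE, where the reward is simply negative tour length, so no handcrafted branching heuristic is required.

```python
import numpy as np

# Toy sketch: REINFORCE learns a tour-construction policy for a tiny TSP.
# theta[i, j] scores moving from city i to city j; the reward is just the
# negative tour length, with no handcrafted heuristic.
rng = np.random.default_rng(3)
n = 6
coords = rng.random((n, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
theta = np.zeros((n, n))
lr, baseline = 0.1, 0.0

def sample_tour():
    tour, grads = [0], []
    unvisited = set(range(1, n))
    while unvisited:
        i, cand = tour[-1], sorted(unvisited)
        logits = theta[i, cand]
        p = np.exp(logits - logits.max()); p /= p.sum()
        j = rng.choice(len(cand), p=p)
        g = np.zeros((n, n))                       # gradient of log pi(a | s)
        g[i, cand] = -p; g[i, cand[j]] += 1.0
        grads.append(g); tour.append(cand[j]); unvisited.remove(cand[j])
    return tour, grads

for step in range(2000):
    tour, grads = sample_tour()
    length = sum(dist[tour[i], tour[(i + 1) % n]] for i in range(n))
    reward = -length
    baseline += 0.05 * (reward - baseline)         # running baseline
    for g in grads:
        theta += lr * (reward - baseline) * g      # REINFORCE update

print("final sampled tour length:", length)
```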

Adversarial Machine Learning

Adversarial machine learning studies how model outputs can be manipulated by carefully constructed inputs. This phenomenon opens up potential security risks via adversarial example attacks but, if leveraged correctly, also provides an interesting and powerful way to train machine learning models. In some sense, adversarial attacks and adversarial training are two sides of the same coin.

As a security risk, whenever a machine learning system takes input from users or the physical world – such as machine-learning-as-a-service (MLaaS) or computer vision recognition in autonomous vehicles – an adversary can feed the model carefully perturbed data that appear to be legitimate but trigger unexpected behaviour from the machine learning system.
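A standard example of such a perturbation is the fast gradient sign method (FGSM). The sketch below applies it to a toy linear classifier, where the input-gradient of the loss can be written in closed form; the model and data are synthetic stand-ins.

```python
import numpy as np

# Sketch of the fast gradient sign method (FGSM) on a linear classifier:
# a small perturbation in the sign of the loss gradient pushes the margin
# toward misclassification while the input stays essentially unchanged.
rng = np.random.default_rng(4)
w, b = rng.normal(size=20), 0.0          # stand-in for a trained model
x = rng.normal(size=20)
y = 1.0 if w @ x + b > 0 else -1.0       # model's (correct) label for x

def margin(x):                            # y * f(x); positive = correct
    return y * (w @ x + b)

# Loss -log sigmoid(y f(x)) has input-gradient -y * sigmoid(-y f(x)) * w,
# so the FGSM step is x + eps * sign(grad_x loss).
grad_x = -y * (1.0 / (1.0 + np.exp(margin(x)))) * w
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

# The margin drops by eps * sum(|w|); a large enough eps flips the label.
print("clean margin:", margin(x), "adversarial margin:", margin(x_adv))
```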

However, if the induced error is leveraged correctly, adversarial training, and in particular generative adversarial networks (GANs), provides a new approach to learning. We typically train ML models by minimizing an objective with respect to their parameters. This objective function is usually hand-designed to capture how a model's outputs deviate from the correct outputs. Coming up with a good objective function is a challenging research problem in itself and has traditionally required knowledge about both the data and the problem to be solved. GANs remove or reduce the need for humans to engineer the objective function; instead, the system learns the objective from the data. This has great potential impact on the field, as it automates another part of the data-processing pipeline.
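To make the idea concrete, here is a minimal GAN sketch that fits a one-dimensional Gaussian, assuming PyTorch; the architectures and hyperparameters are illustrative. The discriminator serves as the learned objective that the generator trains against.

```python
import torch
import torch.nn as nn

# Minimal GAN sketch (illustrative): the discriminator D learns to tell
# real 1-D Gaussian samples from fakes, and the generator G trains against
# D's judgement, so the objective is learned rather than hand-designed.
torch.manual_seed(0)
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = 3.0 + 0.5 * torch.randn(64, 1)          # data: N(3, 0.5)
    fake = G(torch.randn(64, 8))
    # discriminator step: push real toward label 1, fakes toward label 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator step: fool D into labelling fakes as real
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

samples = G(torch.randn(1000, 8)).detach()
print(samples.mean().item(), samples.std().item())  # should approach 3.0, 0.5
```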

At Borealis AI, we are actively contributing to both sub-areas of adversarial machine learning. We believe that adversarial perturbations are a unique probe for a better fundamental understanding of ML models. For example, we don't yet understand what causes adversarial examples or why they can often be usefully transferred between models. With regard to adversarial training, our research interests are twofold: i) deepening the theoretical understanding of GANs, with the hope of improving their stability and solution quality; and ii) applying GANs to novel data domains and exploiting adversarial training for better learning and inference in other ML models.
