Tutorial #1: bias and fairness in AI

Author(s): S. Prince 

This tutorial discusses how bias can be introduced into the machine learning pipeline, what it means for a decision to be fair, and methods to remove bias and ensure fairness. As machine learning algorithms are increasingly used to determine important real-world outcomes such as loan approval, pay rates, and parole decisions, it is incumbent on the AI community to minimize unintentional unfairness and discrimination.

Further Reading:

Tutorial #2: few-shot learning and meta-learning I

Author(s): W. Zi, L. S. Ghoraie, S. Prince

This tutorial describes few-shot and meta-learning problems and introduces a classification of methods. We also discuss methods that use a series of training tasks to learn prior knowledge about the similarity and dissimilarity of classes that can be exploited for future few-shot tasks. 
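A well-known instance of this similarity-based idea is classification by class prototypes, as in prototypical networks. The sketch below is illustrative only: it works directly in a made-up 2-D "embedding space", whereas in practice the embeddings would come from a network trained over many episodes.

```python
import numpy as np

def prototype_classify(support_x, support_y, query_x):
    """Nearest-prototype classification for one few-shot episode:
    each class is summarized by the mean of its support embeddings."""
    classes = np.unique(support_y)
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    # Euclidean distance from each query embedding to each class prototype
    d = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
    return classes[np.argmin(d, axis=1)]

# a 2-way, 3-shot toy episode in a hypothetical 2-D embedding space
support_x = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, -0.1],
                      [3.0, 3.0], [3.1, 2.9], [2.9, 3.2]])
support_y = np.array([0, 0, 0, 1, 1, 1])
query_x = np.array([[0.05, 0.0], [3.0, 3.1]])
print(prototype_classify(support_x, support_y, query_x))  # → [0 1]
```

The prior knowledge here is the assumption that examples of the same class cluster together in the embedding space, so a new class can be recognized from only a handful of examples.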

Further Reading:

Tutorial #3: few-shot learning and meta-learning II

Author(s): W. Zi, L. S. Ghoraie, S. Prince

In part II of our tutorial on few-shot and meta-learning, we discuss methods that incorporate prior knowledge about how to learn models, as well as methods that incorporate prior knowledge about the data itself. These include three distinct approaches: "learning to initialize", "learning to optimize", and "sequence methods".
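The "learning to initialize" idea can be caricatured in a few lines, in the spirit of first-order MAML. The scalar model, one-parameter task family, and learning rates below are all invented for illustration and stand in for the neural networks used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 0.1, 0.05      # inner (adaptation) and outer (meta) learning rates
w = 5.0                      # meta-initialization, deliberately started far away

def task_loss(w, a):
    # each toy task is a one-parameter problem whose optimum is at w = a
    return (w - a) ** 2

def inner_step(w, a):
    # one gradient step of within-task adaptation
    return w - alpha * 2.0 * (w - a)

# meta-training: tasks drawn from a distribution over the task parameter a
for a in rng.uniform(-1.0, 1.0, size=100):
    w_adapted = inner_step(w, a)
    # first-order MAML outer update: step on the post-adaptation loss,
    # ignoring the dependence of the adaptation step itself on w
    w -= beta * 2.0 * (w_adapted - a)

# w is pulled toward an initialization (near the task mean, 0) from which
# a single adaptation step does well on any task in the family
```

The prior knowledge learned here is not about any single task but about where to start so that a few gradient steps suffice on a new one.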

Further Reading:

Tutorial #4: auxiliary tasks in deep reinforcement learning

Author(s): P. Hernandez-Leal, B. Kartal, M. E. Taylor

This tutorial focuses on the use of auxiliary tasks to improve the speed of learning in deep reinforcement learning (RL). Auxiliary tasks are additional tasks that are learned simultaneously with the main RL goal and that generate a more consistent learning signal. The system uses these signals to learn a shared representation and hence speeds up progress on the main RL task. We also explore examples from a variety of domains.
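The mechanism can be sketched with a toy, fully supervised stand-in (not an actual RL agent): a shared linear encoder is trained jointly on a sparse "main" target and a denser auxiliary target, with gradients from both heads shaping the shared representation. All shapes, targets, and rates below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(size=(64, 8))                        # batch of observations
true_W = rng.normal(size=(8, 4))
latent = obs @ true_W                                  # hidden structure to recover
value_target = latent @ rng.normal(size=4)             # stand-in for the main RL signal
next_obs_target = latent @ rng.normal(size=(4, 8))     # denser auxiliary signal

W_enc = 0.1 * rng.normal(size=(8, 4))                  # shared encoder
w_val = np.zeros(4)                                    # main-task head
W_aux = np.zeros((4, 8))                               # auxiliary-task head
lr, aux_weight = 0.005, 0.5

def losses():
    f = obs @ W_enc
    return (np.mean((f @ w_val - value_target) ** 2),
            np.mean((f @ W_aux - next_obs_target) ** 2))

start_main, start_aux = losses()
for _ in range(300):
    f = obs @ W_enc
    err_v = f @ w_val - value_target
    err_a = f @ W_aux - next_obs_target
    # both heads send gradients into the shared encoder, so the auxiliary
    # task helps shape the representation used by the main task
    g_enc = obs.T @ (np.outer(err_v, w_val) + aux_weight * err_a @ W_aux.T) / len(obs)
    w_val -= lr * f.T @ err_v / len(obs)
    W_aux -= lr * aux_weight * f.T @ err_a / len(obs)
    W_enc -= lr * g_enc
end_main, end_aux = losses()
```

In deep RL the same pattern appears with a neural encoder and auxiliary heads such as reward prediction or pixel control, trained alongside the policy loss.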

Further Reading:

Tutorial #5: variational autoencoders

Author(s): S. Prince 

Tutorial #8: Bayesian optimization

Author(s): M. O. Ahmed, S. Prince

In this tutorial, we dive into Bayesian optimization, its key components, and its applications. Optimization is at the heart of machine learning, and Bayesian optimization is a framework suited to problems that are otherwise difficult, such as those where the function is expensive to evaluate. The core idea is to build a model of the entire function that we are optimizing, capturing both our current estimate of that function and the uncertainty around that estimate. Based on this model, we choose where to sample the function next, then update the model with the observed sample. This process repeats until we are sufficiently certain where the best point on the function lies.
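The loop just described (model, sample, update) can be sketched with a Gaussian-process surrogate and an upper-confidence-bound acquisition function. The objective, kernel settings, and evaluation budget below are invented for illustration; the tutorial covers the components in far more depth.

```python
import numpy as np

def rbf_kernel(a, b, length=0.3):
    # squared-exponential kernel between 1-D input arrays
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_obs, y_obs, x_query, jitter=1e-6):
    # Gaussian-process posterior: the "model of the entire function",
    # giving an estimate (mean) and uncertainty (std) at every query point
    K = rbf_kernel(x_obs, x_obs) + jitter * np.eye(len(x_obs))
    Ks = rbf_kernel(x_obs, x_query)
    mean = Ks.T @ np.linalg.solve(K, y_obs)
    v = np.linalg.solve(K, Ks)
    var = np.diag(rbf_kernel(x_query, x_query)) - np.sum(Ks * v, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

def objective(x):
    # made-up expensive black-box function we want to maximize
    return np.sin(3.0 * x) * (1.0 - x) + 0.5

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 200)          # candidate sample locations
x_obs = rng.uniform(0.0, 1.0, size=3)      # a few initial evaluations
y_obs = objective(x_obs)

for _ in range(10):
    mean, std = gp_posterior(x_obs, y_obs, grid)
    ucb = mean + 2.0 * std                         # upper-confidence-bound acquisition
    x_next = grid[np.argmax(ucb)]                  # sample where the model looks best
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))    # update the model with the sample

best = x_obs[np.argmax(y_obs)]
```

The acquisition function trades off exploitation (high posterior mean) against exploration (high posterior uncertainty), which is what lets the loop find good points with few function evaluations.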

Further Reading: