

Tutorial #1: bias and fairness in AI

Author(s): S. Prince 

This tutorial discusses how bias can be introduced into the machine learning pipeline, what it means for a decision to be fair, and methods to remove bias and ensure fairness. As machine learning algorithms are increasingly used to determine important real-world outcomes such as loan approval, pay rates, and parole decisions, it is incumbent on the AI community to minimize unintentional unfairness and discrimination.
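As a rough illustration of how fairness can be quantified (one possible set of criteria, not necessarily the exact ones covered in the tutorial), the sketch below computes the demographic-parity and equal-opportunity gaps of a binary classifier across a protected group. All predictions, labels, and group memberships are invented for the example.

```python
import numpy as np

# Hypothetical predictions, true labels, and a binary protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_true = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    rate0 = y_pred[group == 0].mean()
    rate1 = y_pred[group == 1].mean()
    return abs(rate0 - rate1)

def equal_opportunity_gap(y_pred, y_true, group):
    """Difference in true-positive rates between the two groups."""
    tpr0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return abs(tpr0 - tpr1)

print(demographic_parity_gap(y_pred, group))      # 0.5 on this toy data
print(equal_opportunity_gap(y_pred, y_true, group))
```

A gap of zero under either measure is one possible definition of a "fair" decision rule; the tutorial discusses how such criteria relate and can conflict.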

Further Reading

 


Tutorial #2: few-shot learning and meta-learning I

Author(s): W. Zi, L. S. Ghoraie, S. Prince

This tutorial describes few-shot and meta-learning problems and introduces a classification of methods. We also discuss methods that use a series of training tasks to learn prior knowledge about the similarity and dissimilarity of classes that can be exploited for future few-shot tasks. 
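One common instantiation of this idea is metric-based few-shot classification in the style of prototypical networks: classes are represented by the mean embedding of their few labelled examples, and queries are assigned to the nearest prototype. The sketch below uses made-up embeddings and is an illustration of the idea, not the tutorial's exact method.

```python
import numpy as np

def prototypes(support_embeddings, support_labels, n_classes):
    """Mean embedding per class from the few labelled support examples."""
    return np.stack([support_embeddings[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_embeddings, protos):
    """Assign each query to the class with the nearest prototype."""
    # Squared Euclidean distance between each query and each prototype.
    d = ((query_embeddings[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# Toy 2-way, 2-shot episode with 3-dimensional embeddings (values invented).
support = np.array([[0.9, 0.1, 0.0], [1.1, 0.0, 0.1],   # class 0
                    [0.0, 1.0, 0.9], [0.1, 0.9, 1.1]])  # class 1
labels  = np.array([0, 0, 1, 1])
queries = np.array([[1.0, 0.05, 0.05], [0.05, 0.95, 1.0]])

protos = prototypes(support, labels, n_classes=2)
print(classify(queries, protos))   # expected: [0 1]
```

In practice the embeddings come from a network trained across many such episodes, which is how prior knowledge about class similarity is learned.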

Further Reading

 


Tutorial #3: few-shot learning and meta-learning II

Author(s): W. Zi, L. S. Ghoraie, S. Prince

In part II of our tutorial on few-shot and meta-learning, we discuss methods that incorporate prior knowledge about how to learn models, as well as methods that incorporate prior knowledge about the data itself. These include three distinct approaches: "learning to initialize", "learning to optimize", and "sequence methods".
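As a rough sketch of the "learning to initialize" family (a MAML-style, first-order loop; not necessarily the formulation used in the tutorial), the toy example below meta-learns a shared initialization for a one-parameter regression model: each task adapts the initialization with one inner gradient step, and the initialization is then nudged by the average post-adaptation gradient. All tasks and data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Synthetic 1-D linear regression task y = w * x with a random slope."""
    w = rng.uniform(-2.0, 2.0)
    x = rng.normal(size=20)
    return x, w * x

def grad(theta, x, y):
    """Gradient of mean squared error for the model y_hat = theta * x."""
    return 2.0 * np.mean((theta * x - y) * x)

theta = 0.0                      # shared initialization being meta-learned
inner_lr, outer_lr = 0.1, 0.01

for step in range(1000):
    outer_grads = []
    for _ in range(5):           # a meta-batch of training tasks
        x, y = sample_task()
        adapted = theta - inner_lr * grad(theta, x, y)   # inner adaptation step
        outer_grads.append(grad(adapted, x, y))          # first-order outer gradient
    theta -= outer_lr * np.mean(outer_grads)             # update the initialization

print("meta-learned initialization:", theta)
```

The point of the sketch is the two-level loop structure: the inner step adapts to a single task, while the outer step improves how well that adaptation works across tasks.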

Further Reading

 


Tutorial #4: auxiliary tasks in deep reinforcement learning

Author(s): P. Hernandez-Leal, B. Kartal, M. E. Taylor

This tutorial focuses on the use of auxiliary tasks to improve the speed of learning in deep reinforcement learning (RL). Auxiliary tasks are additional tasks learned simultaneously with the main RL goal that generate a more consistent learning signal. The system uses these signals to learn a shared representation and hence speed up progress on the main RL task. We also explore examples from a variety of domains.
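A minimal sketch of the architectural idea, assuming a PyTorch-style setup that is not part of the tutorial itself: a shared trunk feeds both the main RL head and an auxiliary head, and the combined loss trains the shared representation. The loss terms and targets below are stand-ins.

```python
import torch
import torch.nn as nn

class SharedRepAgent(nn.Module):
    """Shared trunk with a policy head plus an auxiliary prediction head."""
    def __init__(self, obs_dim, n_actions, aux_dim):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU())
        self.policy_head = nn.Linear(64, n_actions)   # main RL objective
        self.aux_head = nn.Linear(64, aux_dim)        # e.g. reward or next-state prediction

    def forward(self, obs):
        h = self.trunk(obs)
        return self.policy_head(h), self.aux_head(h)

# Hypothetical training step: both losses back-propagate through the shared trunk.
agent = SharedRepAgent(obs_dim=8, n_actions=4, aux_dim=8)
optimizer = torch.optim.Adam(agent.parameters(), lr=1e-3)

obs = torch.randn(32, 8)                       # dummy batch of observations
logits, aux_pred = agent(obs)
rl_loss = -logits.log_softmax(dim=-1).mean()   # stand-in for the real RL loss
aux_loss = nn.functional.mse_loss(aux_pred, torch.randn(32, 8))  # dummy auxiliary target

loss = rl_loss + 0.1 * aux_loss                # weighted sum of the two signals
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Because both heads share the trunk, the auxiliary gradient keeps shaping the representation even when the RL reward signal is sparse.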

Further Reading

 


Tutorial #5: variational autoencoders

Author(s): S. Prince 

In this tutorial, we discuss latent variable models in general and then move on to the specific case of the non-linear latent variable model. We'll see that maximum likelihood learning of this model is not straightforward, but that we can define a lower bound on the likelihood. We then demonstrate how the autoencoder architecture can approximate this bound using a Monte Carlo (sampling) method. To maximize the bound, we need to compute derivatives, but unfortunately, it's not possible to compute the derivative of the sampling component. However, we'll show how to side-step this problem using the reparameterization trick. Finally, we explore extensions of the VAE and some of its drawbacks.
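A minimal sketch of the reparameterization trick itself, assuming a Gaussian posterior whose mean mu and log-variance log_var would come from an encoder (here they are just placeholder arrays): instead of sampling z directly, we sample standard normal noise and form z as a deterministic function of mu and log_var, so derivatives can flow through them.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, sigma^2) as a deterministic function of (mu, log_var)
    plus standard normal noise, so gradients can pass through mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL divergence KL(N(mu, sigma^2) || N(0, 1)), summed over dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

# Hypothetical encoder outputs for a batch of 4 data points, 2 latent dimensions.
mu = np.array([[0.0, 0.5], [1.0, -1.0], [0.2, 0.2], [-0.3, 0.8]])
log_var = np.full_like(mu, -1.0)

z = reparameterize(mu, log_var)          # one Monte Carlo sample per data point
print(z.shape, kl_to_standard_normal(mu, log_var))
```

In a full VAE, z is passed to the decoder, and the reconstruction term plus the KL term above form the lower bound being maximized.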

Further Reading

 


Tutorial #8: Bayesian optimization

Author(s): M. O. Ahmed, S. Prince

In this tutorial, we dive into Bayesian optimization, its key components, and its applications. Optimization is at the heart of machine learning, and Bayesian optimization is a framework that can handle many of the optimization problems we discuss. The core idea is to build a model of the entire function that we are optimizing. This model includes both our current estimate of that function and the uncertainty around that estimate. By considering this model, we can choose where to sample the function next. We then update the model based on the observed sample. This process continues until we are sufficiently certain of where the best point on the function lies.
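A hedged sketch of that loop on a toy 1-D problem, using a hand-rolled Gaussian-process surrogate and an upper-confidence-bound acquisition rule; none of these specific modelling choices are taken from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                                   # the expensive black-box function (toy)
    return -(x - 0.7) ** 2 + 0.1 * np.sin(20 * x)

def rbf(a, b, length=0.1):
    """Squared-exponential kernel between two sets of 1-D points."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length**2)

def gp_posterior(x_obs, y_obs, x_grid, noise=1e-4):
    """GP posterior mean and standard deviation on a grid of candidate points."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_obs, x_grid)
    Kinv = np.linalg.inv(K)
    mean = Ks.T @ Kinv @ y_obs
    var = np.diag(rbf(x_grid, x_grid) - Ks.T @ Kinv @ Ks)
    return mean, np.sqrt(np.maximum(var, 0.0))

x_grid = np.linspace(0.0, 1.0, 200)
x_obs = rng.uniform(0.0, 1.0, size=3)       # a few initial random evaluations
y_obs = f(x_obs)

for _ in range(10):
    mean, std = gp_posterior(x_obs, y_obs, x_grid)
    ucb = mean + 2.0 * std                  # acquisition: favour high mean or high uncertainty
    x_next = x_grid[np.argmax(ucb)]         # choose where to sample next
    x_obs = np.append(x_obs, x_next)        # evaluate and update the model's data
    y_obs = np.append(y_obs, f(x_next))

print("best x found:", x_obs[np.argmax(y_obs)])
```

Each iteration trades off exploiting regions the surrogate believes are good against exploring regions where it is uncertain, which is exactly the model-based loop described above.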

Further Reading
