There’s been a lot of debate about the importance of responsible AI over the past few years. And for good reason: if we are to unlock the value of AI for future generations, we must ensure that innovation and ethics go hand in hand. There have been too many instances where machine learning models, left unchecked, have had a discriminatory impact. The AI community has woken up to the risks and is working to earn the social licence required to take full advantage of AI’s transformative potential.

Responsible AI is a complex, global challenge. As the machine learning research institute for the Royal Bank of Canada, Borealis AI has the privilege—and responsibility—of using machine learning to support the financial well-being of Canadians. And our mission goes beyond applying machine learning to shape the future of banking: it includes a mandate to build tools that help everyone apply AI safely.

Through our RESPECT AI initiative and our researchers’ continuous focus on balancing innovation with ethics, we spend much of our time advocating for greater debate and action on the responsible use of AI. 

We were therefore delighted when the Montreal AI Ethics Institute approached us to contribute to their newest publication, The State of AI Ethics. The report serves as a rich compendium of valuable viewpoints and scientific research in the field of AI ethics.

From algorithmic injustice, privacy concerns and labour impacts to the influence of misinformation, social media platforms and corporate interests, the publication shines a spotlight on many of the key ethical considerations in today’s AI environment.

In my introduction to the chapter on Discrimination, I noted that the level of debate and discussion on these topics is encouraging. Yet I also raised a concern: the gap between talk and action is growing. Policymakers and business leaders intuitively know AI carries ethical risks; they just don’t know what they should be doing to manage and mitigate them.

With the hope of helping catalyze positive action, I shared a few insights from our work in the field, all of which I believe are relevant to business leaders and AI developers across the country. Here are quick excerpts from those insights (the full text can be found in my introduction to the Discrimination chapter here). 

  1. Understand the context: Take great care to understand the unique contexts in which your tools and AI models will be used, and adjust accordingly. Anthropological sensitivity – a capability not often taught to AI developers – will be needed. 
  2. Enhance model validation: Model validation and governance are perhaps not the most exciting parts of AI development and management, but they are likely the most important steps in staying on top of the risks of bias and discrimination (a sketch of what one such check might look like follows this list). 
  3. Focus on diversity: Businesses and governments will want to redouble their efforts to encourage greater diversity within the AI community. More diverse voices at the table will help fuel the creation of technologies that work for everyone. 
  4. Get people talking: Developers, business leaders, policymakers and – perhaps most importantly – citizens need to be more aware of the risks and positive implications of AI so they can be better prepared to ask the tough questions and manage the right risks.
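
To make the second point concrete, here is a minimal sketch of the kind of automated bias check a model-validation pipeline might include. Everything in it is illustrative: the synthetic data, the two-group protected attribute and the 0.05 threshold are assumptions chosen for the example, not a prescription for any particular model or policy.

```python
# Illustrative bias check for a model-validation pipeline (hypothetical example).
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Synthetic stand-ins for a model's validation-set predictions and a
# protected attribute; in practice both come from your own pipeline.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1_000)                           # two demographic groups
y_pred = (rng.random(1_000) < 0.30 + 0.10 * group).astype(int)   # deliberately skewed

gap = demographic_parity_gap(y_pred, group)
print(f"Demographic parity gap: {gap:.3f}")
if gap > 0.05:  # the threshold is a governance decision, not a universal constant
    print("Flagged for review: positive-outcome rates differ across groups.")
```

A check like this captures only one narrow notion of fairness; a real governance process would track several metrics and, just as importantly, decide in advance who reviews a flag and what action follows.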

We always have to step back and remember that machine learning models rely on the assumption that the future will look like the past. For many social and policy issues, however, we explicitly want to change our practices so that the future looks different from the past. We want to dream about a better future and take action to bring it into being. Machine learning algorithms can be designed to usher in the future we want—but only if we take care to design them responsibly.