AI permeates our daily lives, and ensuring it is developed and used responsibly and ethically has become a top priority. RESPECT AI is a hub for the AI community and business executives looking for practical advice and solutions that enable more responsible adoption of this technology.
Borealis AI and RBC launch RESPECT AI, bringing ethical and responsible AI to all
Explainable AI for business users: Talking to the CEO of XaiPient
In conversation with Prasad Chalasani, CEO and Co-Founder of XaiPient.
Resetting regulation: A new approach to regulating ML
In conversation with Gillian Hadfield, Director of the Schwartz Reisman Institute for Technology and Society.
What is fair? And can we measure it?
In conversation with Richard Zemel, Co-Founder and Director of Research at the Vector Institute for Artificial Intelligence; Industrial Research Chair in Machine Learning at the University of Toronto; and Senior Fellow at the Canadian Institute for Advanced Research.
Right to some Privacy
In conversation with Holly Shonaman, Chief Privacy Officer at RBC.
AI models can be vulnerable to adversarial attacks. Our comprehensive toolkit includes AdverTorch, our well-established adversarial robustness research library, which implements a range of attack and defense strategies that can be used to evaluate and protect against these risks.
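To illustrate the kind of attack such a toolkit is built to study, here is a minimal sketch of the fast gradient sign method (FGSM), one classic adversarial attack, run against a toy logistic-regression model. The weights and input values are invented for illustration, and this is not AdverTorch's actual API.

```python
import numpy as np

def fgsm_attack(x, w, b, y, eps):
    """Fast Gradient Sign Method against a logistic-regression model.

    Perturbs the input x by eps in the direction that increases the
    cross-entropy loss: x_adv = x + eps * sign(dL/dx).
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # predicted probability of class 1
    grad_x = (p - y) * w           # dL/dx for cross-entropy loss
    return x + eps * np.sign(grad_x)

# Toy model: weights are illustrative, not learned from real data.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.5, 0.5])           # clean input with true label y = 1
x_adv = fgsm_attack(x, w, b, y=1.0, eps=0.1)
# The small perturbation lowers the logit, pushing the model away
# from the correct prediction.
```

Defenses such as adversarial training fold perturbed examples like `x_adv` back into the training set so the model learns to resist them.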
Bias has long existed in society, and many organizations don't yet understand how it manifests in AI. We focus on how to detect and manage bias in order to ensure a fair and ethical approach to AI.
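One simple way to detect bias is to compare how often a model makes positive decisions for different groups. The sketch below computes the demographic parity gap, a common (though not sufficient on its own) fairness metric; the predictions and group labels are invented for illustration.

```python
def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: 0/1 model decisions; group: 0/1 protected-attribute labels.
    A gap near 0 suggests the acceptance rate does not depend on group
    membership; a large gap flags the model for closer review.
    """
    def rate(g):
        members = [p for p, gr in zip(y_pred, group) if gr == g]
        return sum(members) / len(members)
    return abs(rate(0) - rate(1))

# Illustrative data: group 0 is accepted 3/4 of the time, group 1 only 1/4.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_gap(preds, groups)   # 0.5 -- a large disparity
```

In practice this is one of several complementary metrics (equalized odds, calibration across groups) that together give a fuller picture of model fairness.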
Model validation is key to ensuring that algorithms are reliable and effective. Modern AI needs validation more than ever, yet this technology presents its own challenges to traditional validation techniques.
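A foundational validation technique is k-fold cross-validation: hold out each slice of the data in turn, fit on the rest, and average the held-out scores. The sketch below uses a deliberately trivial "model" (predict the training mean, scored by mean squared error) so the validation loop itself is the focus; the data and model are illustrative only.

```python
import statistics

def cross_validate(data, k, fit, score):
    """k-fold cross-validation: average held-out score across k folds."""
    folds = [list(range(i, len(data), k)) for i in range(k)]
    scores = []
    for held_out in folds:
        train = [data[i] for i in range(len(data)) if i not in held_out]
        test  = [data[i] for i in held_out]
        model = fit(train)                    # fit only on the training split
        scores.append(score(model, test))     # score only on unseen points
    return statistics.mean(scores)

# Toy "model": predict the training mean; score with mean squared error.
data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
fit = lambda train: statistics.mean(train)
score = lambda m, test: statistics.mean((x - m) ** 2 for x in test)
cv_error = cross_validate(data, k=3, fit=fit, score=score)   # 3.75
```

Keeping the held-out fold strictly unseen during fitting is what makes the estimate honest; the same discipline applies to far more complex models.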
Understanding what influences the decisions of a machine learning model is a critical step in the adoption of AI. Our research provides deeper insight into model explainability.
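One model-agnostic way to see which inputs drive a model's decisions is permutation importance: shuffle one feature column and measure how much accuracy drops. The sketch below applies it to a toy threshold classifier; the model, data, and helper names are invented for illustration.

```python
import random

def permutation_importance(model, X, y, col, metric, n_repeats=10, seed=0):
    """Average drop in a metric when one feature column is shuffled.

    A large drop means the model relies on that feature; a drop near
    zero means the feature barely influences its decisions.
    """
    rng = random.Random(seed)
    base = metric(model, X, y)
    drops = []
    for _ in range(n_repeats):
        perm = [row[:] for row in X]          # copy so X stays intact
        vals = [row[col] for row in perm]
        rng.shuffle(vals)                     # break the feature-label link
        for row, v in zip(perm, vals):
            row[col] = v
        drops.append(base - metric(model, perm, y))
    return sum(drops) / n_repeats

# Toy model: predicts positive when feature 0 exceeds 0.5; feature 1 unused.
model = lambda row: 1 if row[0] > 0.5 else 0
accuracy = lambda m, X, y: sum(m(r) == t for r, t in zip(X, y)) / len(y)
X = [[0.9, 0.1], [0.8, 0.9], [0.2, 0.2], [0.1, 0.8]]
y = [1, 1, 0, 0]
imp0 = permutation_importance(model, X, y, col=0, metric=accuracy)
imp1 = permutation_importance(model, X, y, col=1, metric=accuracy)
# imp0 > imp1: only feature 0 actually drives the model's decisions.
```

Because it treats the model as a black box, the same probe works on anything from linear models to deep networks.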
Data privacy is paramount in building responsible AI. Our synthetic data generation toolkit allows developers to gain insight without compromising data privacy or integrity.
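The core idea behind synthetic data is to sample new records from a model of the real data rather than releasing the records themselves. The sketch below fits independent per-column Gaussians, a deliberately simple stand-in for real generators (copulas, GANs, differentially private methods) that model the joint distribution; the data values are invented for illustration.

```python
import random
import statistics

def synthesize(real_rows, n, seed=0):
    """Sample n synthetic rows from per-column Gaussians fitted to real data.

    Marginal statistics are approximately preserved, but no real record
    is copied. Note: independent marginals ignore correlations between
    columns, which production-grade generators must capture.
    """
    rng = random.Random(seed)
    columns = list(zip(*real_rows))
    params = [(statistics.mean(c), statistics.stdev(c)) for c in columns]
    return [[rng.gauss(mu, sd) for mu, sd in params] for _ in range(n)]

# Illustrative "real" data: (height_cm, weight_kg) records.
real = [[170.0, 65.0], [160.0, 55.0], [180.0, 80.0], [175.0, 70.0]]
fake = synthesize(real, n=100)
# Analysts can study distributions in `fake` without touching `real`.
```

Developers can then compute aggregate statistics or prototype models on the synthetic set while the sensitive originals stay locked down.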