Advancing responsible AI adoption
RESPECT AI: Making machines think more like humans with Prof. Yoshua Bengio
Borealis AI speaks with Professor Yoshua Bengio about the link between human brains and AI, key future research areas, and risks to avoid.
RESPECT AI: Mozilla looks to open-source solutions to responsible AI
In this article, we talk with Mark Surman, President and Executive Director of Mozilla Foundation, about his organization’s focus and the importance of using open source approaches.
RESPECT AI: Decarbonizing AI with Dr. Sasha Luccioni, Climate Lead and AI Researcher at Hugging Face
In this article, we talk with Dr. Sasha Luccioni, Climate Lead and AI Researcher at Hugging Face, about the link between AI and the environment, and explore opportunities for the AI community to make a positive impact.
RESPECT AI: Collaborating with generative AI for the greater good with Dr. Graham Taylor
RESPECT AI: Building AI ethics into the business with Giovanni Leoni of Credo AI
RESPECT AI: Governance for growth with Abhishek Gupta of Montreal AI Ethics Institute
RESPECT AI: Responsible for future success with Dr. Karina Alexanyan of All Tech is Human
RESPECT AI: Building trust in an AI-enabled world with Preeti Shivpuri, Deloitte
RESPECT AI: The evolving world of AI regulation with Carole Piovesan, INQ Law
RESPECT AI: Improving the efficiency of Differential Privacy with Zhiqi Bu, Amazon AWS AI
RESPECT AI | Resetting regulation: A new approach to regulating machine learning
According to the survey, conducted on behalf of RBC by Maru/Matchbox, companies currently using AI/analytics agree it is important for businesses to implement AI in an ethical way. However, 92 per cent have concerns about dealing with the ethical challenges that AI presents, and just over half have someone responsible for the ethical development of data and AI technology.
The results of the survey also highlight significant challenges that businesses face around bias, such as racial and gender bias. The vast majority (88 per cent) of companies believe they have bias within their organization, but almost half do not understand the challenges that bias presents in AI.
“Responsible and safe AI is critical to maintaining trust and accountability, but as this new survey shows, many companies and developers do not have the resources to implement AI safely and ethically. RESPECT AI™ will help enable secure, fair, ethical and trusted AI products and a more responsible adoption of AI technology across industries.”
Dr. Foteini Agrafioti
Head of Borealis AI and RBC’s Chief Science Officer
“At RBC, we see a world where every client interaction and business decision is informed by AI. Because our relationship with our clients is built on a foundation of trust, practicing ethical and responsible AI is not an option – it’s the only way we do business. RESPECT AI™ is proof of our commitment to building a healthy technology ecosystem within and beyond financial services.”
Group Head, Technology & Operations, RBC
RESPECT AI™ Pillars
The ability of an AI system to defend against adversarial attacks. This component of RESPECT AI™ includes Advertorch, Borealis AI’s well-established adversarial robustness research code, which implements a series of attack and defense strategies that can be used to protect against risks. This tool is offered to AI researchers and scientists who aim to advance the field of robustness in machine learning.
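To make the attack/defense idea concrete, here is a minimal, hedged sketch of a fast-gradient-sign-style (FGSM) perturbation against a toy linear classifier, the kind of attack that robustness toolkits such as Advertorch implement and defend against. All names and numbers below are illustrative; this is not Advertorch's API.

```python
def sign(z):
    """Sign of a scalar: -1, 0, or +1."""
    return (z > 0) - (z < 0)

def fgsm_perturb(x, w, y, epsilon):
    """Shift each feature of x by epsilon in the loss-increasing direction.

    For a linear score s = w.x with true label y in {-1, +1}, the gradient
    of a margin loss w.r.t. x points along -y * sign(w), so a bounded
    L-infinity attack adds epsilon in exactly that direction.
    """
    return [xi + epsilon * (-y * sign(wi)) for xi, wi in zip(x, w)]

w = [1.0, -2.0, 0.5]      # toy linear model weights
x = [0.3, -0.2, 0.1]      # clean input; score w.x = 0.75, predicted +1
y = 1                     # correct label for x

x_adv = fgsm_perturb(x, w, y, epsilon=0.4)
score_adv = sum(wi * xi for wi, xi in zip(w, x_adv))
print(score_adv)          # score drops below zero: the prediction flips
```

Even though no feature moved by more than 0.4, the classifier's decision reverses, which is why defenses aim to keep predictions stable under such bounded perturbations.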
Enabling fair and responsible AI starts with the ability to mitigate the challenge of bias in AI models. We have published technical tutorials on bias as well as guidance for organizations to address bias.
AI safety is central to our work. People need to trust that AI models are safe, compliant and robust. Ensuring accountability and reliability is critical to the successful application of AI models. Putting processes in place for model validation, alongside other model governance processes, ensures that algorithms do what they’re supposed to do and don’t do what they’re not supposed to do. We develop new validation methods and invest in research to deal with the scale, complexity and dynamism of AI.
AI poses unique data privacy challenges. Addressing data privacy while leveraging large data sets is an essential component of responsible AI and is critical to responsible AI adoption. We focus on differential privacy in our research, and built the Private Data Generation™ toolbox to offer synthetic ML data samples, a method that allows scientists to use large data sets without risking the exposure of personally identifiable information. Researchers can use this tool to advance the field of AI privacy by proposing novel solutions to this critical issue. We have also published tutorials on differential privacy, among other topics.
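The textbook building block behind differential privacy is the Laplace mechanism: add calibrated noise so that any one person's data has a provably small effect on the released result. The Private Data Generation™ toolbox relies on far more capable DP generative models; the sketch below only illustrates the core accuracy-for-privacy trade-off, and all names in it are illustrative.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value plus Laplace(scale = sensitivity / epsilon) noise.

    Smaller epsilon means more noise and a stronger privacy guarantee.
    """
    rng = rng or random.Random()
    scale = sensitivity / epsilon
    u = rng.random() - 0.5                    # uniform on [-0.5, 0.5)
    # Standard inverse-CDF sampling of Laplace(0, scale) noise.
    noise = -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_value + noise

# Releasing a count: adding or removing one person changes a count by at
# most 1, so the sensitivity is 1.
noisy_count = laplace_mechanism(true_value=1042, sensitivity=1, epsilon=0.5)
print(noisy_count)                            # randomized, near 1042
```

With a very large epsilon the noise vanishes and the exact count leaks; with a small epsilon the count is heavily blurred but each individual's presence is well hidden.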
Explainability, an understanding of how an ML algorithm learns and makes decisions, is a key component of trust in machine learning. Our research, tutorials and interviews with practitioners in the field outline how explainability can help build trust in AI.
Tutorials & Research
Open Source Tools
This toolbox provides machine learning practitioners with the ability to generate private and synthetic data samples from real-world data. It currently implements five state-of-the-art generative models that can generate differentially private synthetic data.
GitHub
LiteTracer acts as a drop-in replacement for argparse, and it can generate unique identifiers for experiments in addition to what argparse already does. Along with a reverse lookup tool, LiteTracer can trace back the state of the project that generated any result tagged by the identifier.
GitHub
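The idea behind such a tool can be sketched in a few lines: derive a stable identifier from an experiment's parsed arguments, so any result tagged with that identifier can be traced back to the exact configuration that produced it. LiteTracer's real API differs; the function and flag names below are illustrative only.

```python
import argparse
import hashlib

def run_id(args: argparse.Namespace) -> str:
    """Hash the sorted (name, value) pairs of parsed args into a short tag.

    The same arguments always map to the same tag, so a result file named
    with the tag can be traced back to its configuration.
    """
    canonical = repr(sorted(vars(args).items()))
    return hashlib.sha1(canonical.encode()).hexdigest()[:8]

parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.01)
parser.add_argument("--epochs", type=int, default=10)

args = parser.parse_args(["--lr", "0.001"])
tag = run_id(args)
print(tag)  # identical arguments reproduce this tag exactly
```

A reverse lookup then only needs a table from tags back to argument dictionaries, which is what makes results reproducible after the fact.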