Advancing responsible AI adoption
AI permeates our daily lives, and ensuring it is being developed and used in a responsible and ethical way has become a top priority.
RESPECT AI™ is a hub for the AI community and business executives looking for practical advice and solutions to enable a more responsible adoption of this technology.
We have pulled together our open-source research code, tutorials, programs, and academic research for the AI community, helping to make ethical AI available to all.
News
RESPECT AI: Making machines think more like humans with Prof. Yoshua Bengio
Borealis AI speaks with Professor Yoshua Bengio about the link between human brains and AI, key future research areas, and risks to avoid.
News
RESPECT AI: Collaborating with generative AI for the greater good with Dr. Graham Taylor
Explore the exciting world of generative AI and the impact it may have on human-machine collaboration in our interview with Dr. Graham Taylor.
News
RESPECT AI: Governance for growth with Abhishek Gupta of Montreal AI Ethics Institute
Learn how to responsibly manage and implement AI governance in your organization with guidance from Abhishek Gupta.
More
RESPECT AI: Responsible for future success with Dr. Karina Alexanyan of All Tech is Human
RESPECT AI: Building trust in an AI-enabled world with Preeti Shivpuri, Deloitte
RESPECT AI: The evolving world of AI regulation with Carole Piovesan, INQ Law
RESPECT AI: Improving the efficiency of Differential Privacy with Zhiqi Bu, Amazon AWS AI
RESPECT AI | Resetting regulation: A new approach to regulating machine learning, by G. Hadfield
In Numbers
92%
In a survey conducted on behalf of RBC by Maru/Matchbox, companies currently using AI and analytics agreed it is important for businesses to implement AI in an ethical way. However, 92 per cent have concerns about dealing with the ethical challenges AI presents, and just over half have someone responsible for the ethical development of data and AI technology.
88%
The survey also highlighted significant challenges that businesses face with bias, such as race and gender bias. The vast majority (88 per cent) of companies believe they have bias within their organization, but almost half do not understand the challenges that bias presents in AI.
RESPECT AI™ Pillars
Robustness
The ability of an AI system to defend against adversarial attacks. This component of RESPECT AI™ includes AdverTorch, Borealis AI's well-established adversarial robustness research code, which implements a series of attack and defense strategies that can be used to protect against such risks. The tool is offered to AI researchers and scientists who aim to advance the field of robustness in machine learning.
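As a flavor of what AdverTorch offers, here is a minimal sketch of crafting adversarial examples with its L-infinity PGD attack; the model and data below are random placeholders standing in for your own trained PyTorch classifier and inputs.

```python
import torch
import torch.nn as nn
from advertorch.attacks import LinfPGDAttack

# Placeholder classifier and data; swap in your own trained model and images.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10)).eval()
images = torch.rand(8, 1, 28, 28)       # batch of inputs in [0, 1]
labels = torch.randint(0, 10, (8,))     # ground-truth class labels

adversary = LinfPGDAttack(
    model,
    loss_fn=nn.CrossEntropyLoss(reduction="sum"),
    eps=0.3,        # maximum L-infinity perturbation
    nb_iter=40,     # number of PGD iterations
    eps_iter=0.01,  # step size per iteration
    rand_init=True,
    clip_min=0.0,
    clip_max=1.0,
    targeted=False,
)

# Perturb the inputs and measure how accuracy degrades under attack.
adv_images = adversary.perturb(images, labels)
with torch.no_grad():
    adv_preds = model(adv_images).argmax(dim=1)
print("robust accuracy:", (adv_preds == labels).float().mean().item())
```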
Fairness
Enabling fair and responsible AI starts with the ability to mitigate bias in AI models. We have published technical tutorials on bias, as well as guidance for organizations on addressing it.
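As an illustration of how bias can be measured, here is a minimal sketch of one common fairness metric, the demographic parity gap; it is a generic example rather than a specific Borealis AI method.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary model predictions (0/1)
    group:  binary protected attribute (0/1 for two demographic groups)
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# A gap near 0 suggests similar treatment across groups; here it is 0.5,
# meaning the two groups receive positive predictions at very different rates.
print(demographic_parity_gap([1, 0, 1, 1], [0, 0, 1, 1]))
```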
Safety and accountability
AI safety is central to our work. People need to trust that AI models are safe, compliant, and robust. Accountability and reliability are critical to the successful application of AI models, and putting processes in place for model validation, alongside other model governance processes, ensures that algorithms do what they're supposed to do and don't do what they're not supposed to do. We develop new validation methods and invest in research to deal with the scale, complexity, and dynamism of AI.
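To make the idea concrete, here is a minimal sketch of an automated validation gate; the model interface, stand-in estimator, and thresholds are illustrative assumptions, not a prescribed framework.

```python
import numpy as np

def validate_model(model, X_val, y_val, min_accuracy=0.90):
    """Run simple pre-deployment checks; raise AssertionError if any fail."""
    preds = model.predict(X_val)
    accuracy = float((preds == y_val).mean())
    assert accuracy >= min_accuracy, f"accuracy {accuracy:.3f} below threshold"
    # Behavioral check: output must be deterministic for fixed inputs.
    assert np.array_equal(preds, model.predict(X_val)), "non-deterministic output"
    return accuracy

class ConstantModel:
    """Stand-in for a real estimator; always predicts class 1."""
    def predict(self, X):
        return np.ones(len(X), dtype=int)

X_val, y_val = np.zeros((10, 3)), np.ones(10, dtype=int)
print(validate_model(ConstantModel(), X_val, y_val))  # 1.0
```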
Privacy
AI poses unique data privacy challenges. Addressing data privacy while leveraging large data sets is an essential component of responsible AI and critical to its adoption. Our research focuses on differential privacy, and we built the Private Data Generation™ toolbox to generate synthetic ML data samples, a method that allows scientists to use large data sets without risking the exposure of personally identifiable information. Researchers can use this tool to advance the field of AI privacy by proposing novel solutions to this critical issue. We have also published tutorials on differential privacy, among other topics.
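For intuition, here is a minimal sketch of the Laplace mechanism, a textbook building block of differential privacy; it illustrates the concept only and is not the API of the Private Data Generation™ toolbox.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release `true_value` with epsilon-differential privacy.

    sensitivity: max change in the statistic from altering one record
    epsilon:     privacy budget (smaller = stronger privacy, more noise)
    """
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Example: privately release a count query over a dataset. Changing any one
# record shifts a count by at most 1, so the sensitivity is 1.
true_count = 1234
print(laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5))
```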
Explainability
Explainability, an understanding of how an ML algorithm learns and makes decisions, is a key component of trust in machine learning. Our research, tutorials, and interviews with practitioners in the field outline how explainability can help build trust in AI.
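As one concrete example, here is a minimal sketch of permutation feature importance, a simple model-agnostic explainability technique; the stand-in model and data are assumptions for illustration.

```python
import numpy as np

def permutation_importance(model, X, y, score_fn, rng=None):
    """Drop in score when each feature is shuffled; larger = more important."""
    rng = rng or np.random.default_rng(0)
    baseline = score_fn(y, model.predict(X))
    drops = []
    for j in range(X.shape[1]):
        X_perm = X.copy()
        X_perm[:, j] = rng.permutation(X_perm[:, j])
        drops.append(baseline - score_fn(y, model.predict(X_perm)))
    return np.array(drops)

class ThresholdModel:
    """Stand-in model: predicts 1 when the first feature is positive."""
    def predict(self, X):
        return (X[:, 0] > 0).astype(int)

def accuracy(y_true, y_pred):
    return float((y_true == y_pred).mean())

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)  # only feature 0 matters
# Expect a large importance for feature 0 and near-zero for the rest.
print(permutation_importance(ThresholdModel(), X, y, accuracy))
```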
Open Source Tools
AdverTorch
This toolbox provides machine learning practitioners with implementations of adversarial attack and defense strategies for research on the robustness of machine learning models.
GitHub
LiteTracer
LiteTracer acts as a drop-in replacement for argparse that, in addition to argparse's usual functionality, generates a unique identifier for each experiment. Combined with a reverse lookup tool, LiteTracer can trace back the state of the project that generated any result tagged with that identifier.
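The sketch below illustrates the underlying idea, deriving a reproducible experiment identifier from parsed arguments, in plain Python; it is a conceptual example, not LiteTracer's actual API.

```python
import argparse
import hashlib
import json

parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=0.01)
parser.add_argument("--seed", type=int, default=0)
args = parser.parse_args()

# Hash the full configuration into a short, reproducible experiment ID:
# the same arguments always yield the same ID, so any tagged result can be
# traced back to the configuration that produced it.
config = json.dumps(vars(args), sort_keys=True)
run_id = hashlib.sha1(config.encode()).hexdigest()[:8]
print(f"experiment {run_id}: {config}")
```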
GitHub
Private Synthetic Data Generation
This toolbox provides machine learning practitioners with the ability to generate private and synthetic data samples from real-world data. It currently implements five state-of-the-art generative models that can generate differentially private synthetic data.
GitHub