Adversarial Robustness

AI models are at risk of adversarial attacks: small, deliberately crafted perturbations to their inputs that cause incorrect predictions. Our toolkit includes Advertorch, our well-established adversarial robustness research library, which implements a range of attack and defense strategies that can be used to evaluate models and protect them against these risks.
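To make the idea of an adversarial attack concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the classic attacks implemented in libraries like Advertorch. The model, weights, and inputs below are illustrative assumptions, not Advertorch's API: the attack nudges each input coordinate by a small step `eps` in the direction that increases the loss.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w, b):
    """Probability of the positive class for a logistic model."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(x, y, w, b, eps):
    """FGSM step for logistic regression.

    For cross-entropy loss, dL/dx = (p - y) * w, so the attack adds
    eps * sign((p - y) * w) to the input.
    """
    p = predict(x, w, b)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

# Toy classifier and a correctly-classified input (hypothetical values).
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([0.2, -0.4, 0.3])
y = 1.0  # true label

p_clean = predict(x, w, b)
x_adv = fgsm_perturb(x, y, w, b, eps=0.5)
p_adv = predict(x_adv, w, b)
# The perturbed input pushes the predicted probability away from the
# true label, even though each coordinate moved by at most eps.
```

Real attacks operate the same way on deep networks, using automatic differentiation to obtain the input gradient; defenses such as adversarial training then fold these perturbed examples back into the training loop.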


Bias and Fairness

Bias has long existed in society, and many organizations don't understand how it manifests in AI. We focus on how to detect and manage bias in order to ensure a fair and ethical approach to AI.
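One simple way to detect bias in a model's decisions is to measure demographic parity: the gap in positive-outcome rates between groups defined by a protected attribute. The data and function below are an illustrative sketch, not our production tooling.

```python
def demographic_parity_difference(preds, groups):
    """Difference in positive-prediction rates between group 1 and group 0."""
    rate = {}
    for g in (0, 1):
        members = [p for p, gr in zip(preds, groups) if gr == g]
        rate[g] = sum(members) / len(members)
    return rate[1] - rate[0]

preds  = [1, 0, 1, 1, 0, 1, 0, 0]   # model decisions (1 = approve)
groups = [0, 0, 0, 0, 1, 1, 1, 1]   # protected attribute (toy data)

gap = demographic_parity_difference(preds, groups)
# gap = 0.25 (group 1 rate) - 0.75 (group 0 rate) = -0.5,
# flagging a large disparity worth investigating.
```

A gap near zero suggests parity on this metric; fairness audits in practice combine several such metrics, since no single number captures every notion of fairness.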

Model Governance

Model validation is key to ensuring that algorithms are reliable and effective. Modern AI needs validation more than ever, yet this technology presents its own challenges to traditional validation techniques.
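A standard building block of model validation is k-fold cross-validation: rather than trusting a single train/test split, the model is trained and scored on k disjoint splits. The pure-Python sketch below (an illustrative assumption, not a specific governance tool) shows how the index splits are constructed.

```python
def k_fold_indices(n, k):
    """Yield (train_idx, val_idx) pairs covering n samples in k folds."""
    # Distribute any remainder across the first n % k folds.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        yield train, val
        start += size

folds = list(k_fold_indices(10, 3))
# 3 folds of sizes 4, 3, 3; every sample index appears in exactly
# one validation split, so each sample is held out exactly once.
```

Averaging a metric across the k validation splits gives a more stable estimate of generalization than any single split, which is one reason validation of modern models leans on resampling rather than a fixed holdout.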


Explainability

Understanding what influences the decisions of a machine learning model is a critical step in the adoption of AI. Our research provides deeper insight into why models make the predictions they do.
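A simple, model-agnostic way to gauge what influences a model's decisions is permutation importance: shuffle one feature's values and measure how much accuracy drops. The toy model and data below are illustrative assumptions, not our research code.

```python
import random

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Accuracy drop when one feature column is randomly shuffled."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    shuffled = [row[feature] for row in X]
    rng.shuffle(shuffled)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, shuffled)]
    return base - accuracy(model, X_perm, y)

# Toy model that only looks at feature 0; feature 1 is pure noise.
model = lambda x: 1 if x[0] > 0 else 0
X = [[1, 5], [-1, 5], [2, -3], [-2, -3]]
y = [1, 0, 1, 0]

imp0 = permutation_importance(model, X, y, feature=0)
imp1 = permutation_importance(model, X, y, feature=1)
# Shuffling feature 1 never changes the predictions, so imp1 == 0.0;
# all of the signal sits in feature 0, so imp0 >= imp1.
```

Because it only needs model predictions, this technique applies to black-box models where gradients or internal structure are unavailable.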

Data Privacy

Data privacy is paramount in building responsible AI. Our toolkit for synthetic data generation allows developers to gain insight from data without compromising its integrity or the privacy of the individuals it describes.
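To illustrate the basic idea behind synthetic data generation, here is a deliberately minimal sketch (not our toolkit): fit a per-column Gaussian to the real records and sample new rows from it, so analysts work with synthetic rows instead of raw records. Production generators use far richer models and formal privacy guarantees such as differential privacy.

```python
import random
import statistics

def fit(columns):
    """Per-column (mean, stdev) parameters of a Gaussian model."""
    return [(statistics.mean(c), statistics.stdev(c)) for c in columns]

def sample(params, n, seed=0):
    """Draw n synthetic rows from the fitted per-column Gaussians."""
    rng = random.Random(seed)
    return [[rng.gauss(mu, sd) for mu, sd in params] for _ in range(n)]

# Toy "real" dataset (hypothetical heights and weights).
real = [[170.0, 65.0], [180.0, 80.0], [160.0, 55.0], [175.0, 72.0]]
columns = list(zip(*real))  # column-wise view of the table

synthetic = sample(fit(columns), n=100)
# 100 synthetic rows with roughly the same per-column mean and spread,
# containing no actual record from the real dataset.
```

Note that this independent-column model discards correlations between features; real synthetic-data tools model the joint distribution precisely so those relationships survive.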