Robustness

Adversarial robustness is a key concern when developing deep learning models: our AdverTorch Python toolbox contains adversarial training scripts, modules for generating adversarial perturbations, and more.
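As a minimal sketch of how such perturbations can be generated with the toolbox (the model and data below are placeholders used only for illustration):

```python
import torch
import torch.nn as nn
from advertorch.attacks import LinfPGDAttack

# Placeholder model and input batch, used only for illustration.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
images = torch.rand(16, 1, 28, 28)   # inputs scaled to [0, 1]
labels = torch.randint(0, 10, (16,))

# An L-infinity PGD adversary wrapped around the model's forward pass.
adversary = LinfPGDAttack(
    model,
    loss_fn=nn.CrossEntropyLoss(reduction="sum"),
    eps=0.3, nb_iter=40, eps_iter=0.01,
    rand_init=True, clip_min=0.0, clip_max=1.0, targeted=False,
)

# Generate adversarially perturbed versions of the clean batch.
adv_images = adversary.perturb(images, labels)
```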


AdverTorch

AdverTorch is a Python toolbox for adversarial robustness research: it contains adversarial training scripts, modules for generating adversarial perturbations, and more.
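For instance, a typical adversarial training loop regenerates adversarial examples for each mini-batch and optimises the model on them; the sketch below uses a toy model and random data purely for illustration:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from advertorch.attacks import LinfPGDAttack

# Toy model and random data stand in for a real training setup.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loader = DataLoader(
    TensorDataset(torch.rand(128, 1, 28, 28), torch.randint(0, 10, (128,))),
    batch_size=32,
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
adversary = LinfPGDAttack(
    model, loss_fn=nn.CrossEntropyLoss(reduction="sum"),
    eps=0.3, nb_iter=10, eps_iter=0.03,
)

for images, labels in loader:
    # Craft adversarial examples on the fly, then update the model on them.
    adv_images = adversary.perturb(images, labels)
    optimizer.zero_grad()
    loss = criterion(model(adv_images), labels)
    loss.backward()
    optimizer.step()
```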


Fairness

We aim to cultivate an AI ecosystem that is free of bias and has the potential to help move our society forward, ethically and responsibly.

Model Governance

AI safety is central to our work. We develop new validation methods and invest in research to deal with the scale, complexity, and dynamism of AI.

Data Privacy

AI poses unique data privacy challenges. Our research focuses on differential privacy, and we built the Private Synthetic Data Generation toolbox to produce synthetic data samples for machine learning.
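At its core, differential privacy adds carefully calibrated noise so that no single record can be inferred from a released result. A minimal sketch of the classic Laplace mechanism (the function and toy data below are illustrative, not part of our toolbox):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Return an epsilon-differentially private estimate of a numeric query.

    Laplace noise with scale sensitivity / epsilon masks the contribution
    of any single record to the query result.
    """
    rng = rng if rng is not None else np.random.default_rng()
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Example: privately release a count over a toy dataset.
ages = np.array([23, 35, 41, 29, 52])
private_count = laplace_mechanism(true_value=len(ages), sensitivity=1.0, epsilon=0.5)
print(private_count)
```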


Private Synthetic Data Generation Toolbox

This toolbox provides machine learning practitioners with the ability to generate private, synthetic data samples from real-world data. It currently implements five state-of-the-art generative models that can generate differentially private synthetic data.
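The models' exact interfaces are documented in the toolbox's repository; as a conceptual illustration of the idea underlying many differentially private training schemes, the sketch below shows a DP-SGD-style update (per-record gradient clipping plus Gaussian noise) on a toy model. The names, data, and hyperparameters are illustrative only and are not the toolbox's API:

```python
import torch
import torch.nn as nn

# Conceptual DP-SGD step: clip each record's gradient, add Gaussian noise,
# then average. Many private generative models are trained with this recipe.
model = nn.Linear(4, 1)
criterion = nn.MSELoss()
x, y = torch.rand(8, 4), torch.rand(8, 1)
clip_norm, noise_multiplier, lr = 1.0, 1.1, 0.05

# Accumulate per-record gradients, each clipped to bound its influence.
summed_grads = [torch.zeros_like(p) for p in model.parameters()]
for xi, yi in zip(x, y):
    model.zero_grad()
    criterion(model(xi.unsqueeze(0)), yi.unsqueeze(0)).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    total_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    scale = min(1.0, clip_norm / (total_norm.item() + 1e-12))
    for s, g in zip(summed_grads, grads):
        s += g * scale

# Add calibrated Gaussian noise, then apply the averaged noisy update.
with torch.no_grad():
    for p, s in zip(model.parameters(), summed_grads):
        noise = torch.randn_like(s) * noise_multiplier * clip_norm
        p -= lr * (s + noise) / len(x)
```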


Explainability

Explainability is a key component of trust in machine learning. Our research helps people better understand, and ultimately trust, AI algorithms.