Words are powerful. So is Machine Learning technology. And that can create risks and challenges. In this post, we talk with Nick Frosst, co-founder of Cohere, about his startup’s efforts to ensure they are developing models and products responsibly.

The views expressed in this article are those of the interviewee and do not necessarily reflect the position of RBC or Borealis AI.

Why is responsible AI important to Cohere?

Nick Frosst (NF):

Cohere offers the tools to solve any natural language processing problem that a developer might have through the use of large language models. We have an API that allows people to access, fine-tune and deploy these state-of-the-art models, giving them the ability to solve pretty much any problem they can formulate.

We’re doing something that is transformative; getting computers to understand language has really broad impacts. We know we need to respect the power of the technology and understand the ways in which it could be used for good and for bad.

As the builder of that tool, you want to enable the good things that can be done with it while, at the same time, make the bad things that can be done with it more difficult to do and less effective.

Why is ethical AI such an interesting and pernicious problem?

(NF):

There’s no silver bullet for this, even though there are a lot of people working on it. Languages continuously change – they’re living things – so there will never be a complete lock on this.

If you take an extreme view, one way to address safety concerns would be to limit access to just the handful of companies that have the resources to create their own large language models. But we think the technology’s really good. We think it’s really transformative. And we want people to have access to it. So limiting access as a way to improve safety is obviously not ideal.

The middle ground is that you make the technology as good as you can, and as ethical and responsible as you can. You then deploy it in a way that gives as many people access to it as possible, while balancing the risk and ensuring it is deployed responsibly.

How do you manage risks and reduce the likelihood of your models generating hateful or harmful content?

(NF):

Let’s use hateful content as an example. Prior to deploying the model, we spend a lot of time trying to reduce the likelihood of it generating hateful content, including identity-based hate.

The most straightforward way is by changing the distribution of the training data. And that can be done with some really simple techniques like word-level filtration – where documents are removed from the training data if they contain a word from a pre-populated list of slurs, for example. But that obviously doesn’t catch everything.
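As a rough illustration, a minimal word-level filter might look like the sketch below. The blocklist contents, function names, and tokenization are hypothetical stand-ins for the purpose of the example, not Cohere’s actual pipeline:

```python
import re

# Hypothetical blocklist; in practice this would be a curated list of slurs
# and other harmful terms (placeholder strings used here).
BLOCKLIST = {"badword1", "badword2"}

def contains_blocked_word(document, blocklist):
    """Return True if any whole word in the document appears in the blocklist."""
    tokens = re.findall(r"[a-z']+", document.lower())
    return any(token in blocklist for token in tokens)

def filter_corpus(documents, blocklist):
    """Drop every training document that contains at least one blocked word."""
    return [doc for doc in documents if not contains_blocked_word(doc, blocklist)]

# Usage: only the first document survives the filter.
corpus = ["a perfectly ordinary sentence", "a sentence containing badword1"]
clean_corpus = filter_corpus(corpus, BLOCKLIST)
```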

Some techniques are much more sophisticated. For example, we recently posted a paper that described how we are using our model to self-identify words and text that should be added to the list. In other words, we are using earlier versions of our large language model to remove harmful data for the next iteration of the model.
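The paper’s exact method isn’t reproduced here, but the general idea of model-assisted filtering can be sketched as follows. The `score_harm` callable is a hypothetical stand-in for a harmfulness score produced by an earlier model (for example, a classifier built on top of it):

```python
from typing import Callable, List

def filter_with_model(documents: List[str],
                      score_harm: Callable[[str], float],
                      threshold: float = 0.5) -> List[str]:
    """Keep only documents that the earlier model scores below the harm threshold."""
    return [doc for doc in documents if score_harm(doc) < threshold]

# Toy scorer standing in for the earlier model's judgement.
def toy_scorer(doc: str) -> float:
    return 0.9 if "harmful" in doc.lower() else 0.1

clean = filter_with_model(["a benign document", "a harmful document"], toy_scorer)
# clean == ["a benign document"]
```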

Have you seen a trade-off between safety and performance?

(NF):

Not really. We haven’t seen a drop in performance when filtering out identity-based hate speech, for example. If we do see a drop in performance, the impact is generally on the model’s ability to generate identity-based hate speech. So it’s really a win-win.

Should tech companies and developers be responsible for building ethics and safety into their products?

(NF):

I think it’s tempting to just say that morality is subjective. To ask, “Who are we to make the decisions?” It’s easy to abdicate the decision. But I don’t agree with that at all.

I think it’s far better to recognize that it’s subjective, and then to work really hard to make the right decisions based on input from as many smart people as possible. And I think founders of startups have an even greater responsibility to ensure the technologies they are building are contributing to the good in the world. We cannot simply abdicate that responsibility to users.

How are you embedding responsible AI principles into your work at Cohere?

(NF): 

I’ll be the first to admit that I’m not an expert in ethics. That’s not my background. And I know that. So it’s really helpful to have a group of people who have studied that area and its intersections with technology.

We set up a Responsibility Council at Cohere. And when we’re faced with a complicated problem, we can reach out to this diverse group of people to get their input. They pay attention to how we’re doing things. They give us advice and recommendations and tell us if we’re doing the right things.

I think in the technology sector, we often assume most problems can be addressed by applying more tech. But the reality is that there are a whole bunch of complicated problems that can’t be addressed with pure tech solutions. These are problems that require people who have spent a lot of time thinking about domains of research outside the hard sciences.

How do you ensure your people are doing the right thing?

(NF): 

We take a holistic and distributed approach to this. Alongside our Responsibility Council, we have our own internal experts who are largely dedicated to working on responsible AI. We also want these concepts and ideas to be flowing across the organization and through the culture. So we try to distribute some of the responsibilities across the whole team, encouraging as many people as possible to work on it.

The point is to ensure the idea of responsible AI doesn’t get stuck in siloed thinking – that people across the organization are engaged with these topics as much as possible. Responsible AI can’t just exist on a slide in the organizational mission statement.

What advice would you give other AI/ML developers and founders?

(NF): 

We really need to respect the technology that we work with. Machine learning can work. It can be transformative. It can have a massive impact on people’s lives. So you need to make sure you are building something that is having a positive impact and minimizing the potential for negative impact.

At Cohere, we try to think about these issues as early as possible in the development cycle. And we are working with a bunch of really smart people to help ensure we don’t allow a blind spot to emerge down the road.

My advice would be to get as much input from as many different people as possible. And to start thinking about it from the very start. Other than that, just try to do your best.


About Nick Frosst

Nick Frosst is the co-founder of Cohere.ai. Prior to founding Cohere, Nick worked on neural network research as part of Geoffrey Hinton’s Toronto Google Brain Team, focusing on capsule networks, adversarial examples, and explainability. Nick holds a BSc from the University of Toronto, with a double major in Computer Science and Cognitive Science. Nick co-founded Cohere in January 2019 with Aidan Gomez and Ivan Zhang.