In this post, we explore the need for more holistic AI risk assessments with Carole Piovesan, Partner and Co-Founder of INQ Data Law.  

The views expressed in this article are those of the interviewee and do not necessarily reflect the position of RBC or Borealis AI.

We have seen an increasing amount of debate about the risks of AI in society. Do you think business decision-makers understand the real risks associated with the technology?

Carole Piovesan (CP):

They certainly know they need to be thinking about it. But I think it’s one of those evolving challenges that executives really have trouble getting their heads around. They intuitively know that AI must be implemented ethically, but they don’t know exactly what that means or how to effect responsible innovation. It’s not an easy issue to deal with and, unfortunately, only a handful of leaders are having that conversation right now. Many more should be.

Is this uncertainty around risk slowing innovation for businesses?

CP:

I think that – all too often – decision-makers are looking at one side of the coin without understanding the other. The problem is that, if you are only looking at the risks without also assessing the benefits you expect to achieve, you probably aren’t going to do anything. As with any informed decision-making, you need to see the full picture. If you’re not quantifying and reporting the benefits alongside the risks, you will probably end up stifling innovation. Of course, there are also those who focus only on the benefits, and that, too, can be a significant problem.

What types of risks should decision-makers be looking for when it comes to AI?

CP: 

It’s always useful to think about risk in terms of categories and sources. With AI, the general categories of risk span a range of areas, including brand, reputation, regulatory compliance and other legal risks. The sources of that risk are more nuanced. Those risks could be in the data – whether the data is complete or robust enough; whether you have authority to use it; questions around ownership and so on – or they could be in the system itself, including its governance and controls. One of the big challenges, therefore, is identifying all the potential and foreseeable consequences of those risks and their sources.

Are Canadian and global regulators looking at this topic?

CP:

Absolutely. The recently proposed Bill C-11 (known as the Digital Charter Implementation Act) would strongly influence the way Canadian companies manage and protect data – both customer data and organizational data. While the Act has been positioned as an update to the Personal Information Protection and Electronic Documents Act (PIPEDA), it goes much further, adding rules around ‘automated decision systems’ that will be very relevant to Canadian businesses using AI.

Globally, we are also seeing a great debate emerge around this topic. The EU has convened its High-Level Expert Group on AI to look at this. In the US, the White House released a draft memo on a risk-based assessment of AI. The Global Partnership on AI has also been driving the discussion. And while the world has yet to achieve consensus on clear global guidelines, it is encouraging to see a healthy discussion around how to conduct valuable risk assessments of AI.

Are regulators driving the agenda then?

CP:

The regulatory agenda is certainly moving ahead. But you can’t just sit around and wait for regulation to be promulgated and interpreted on this topic. As recent media reports clearly illustrate, the public already has views on what is ethical behaviour and what is not. 

That means that decision-makers need to be thinking about their governance right now. They need to be working out how they plan to ensure oversight across the lifecycle of the system, with periodic documentation to demonstrate that they are being diligent.

That’s not a guarantee that nothing will go wrong. But if something does go wrong, it will put you in a much more defensible position legally and reputationally than if you had just ignored the risks and blamed a lack of regulation. 

What is hampering efforts to conduct robust AI risk assessments in Canadian businesses today? 

CP: 

The most common barrier I see is a lack of coordination across functions. For instance, I think the legal teams and the technology and innovation teams need to start working much more collaboratively and at a much earlier stage in the development process – ideally at the ideation phase. That will allow both parties to work with the business to convert ideas into systems that deliver on business objectives while remaining aligned to the overall values of the organization. 

The next challenge, however, is ensuring that everyone is speaking the same language and understanding the same risks in the same way. Terms like ‘explainability’ mean different things to lawyers, business leaders, developers and customers. Having everyone on the same page from the beginning is critical to ensuring risk assessments are robust and holistic.

What advice are you giving your clients about managing the risk of emerging technologies like AI?

CP:

My clients are keen to experiment with emerging technologies and they are not willing to wait for regulation to arrive. So they are being very diligent about how they prepare and manage their risk assessments. 

At the same time, they also recognize that this is about more than just privacy. And we are working with them to create a much broader approach to risk governance and assessment, supported by an integrated team and integrated governance. At each step, we help them think through a risk-versus-benefit analysis that recognizes the unique context of each system.

My advice to clients is to sit down with their innovation teams and decide what the organization is going to look like in five years. Then we work back from there to understand the risks and priorities going forward. If you are only thinking about where you are today, you’ll never build for the future.


About Carole Piovesan

Carole is a partner and co-founder of INQ Data Law, where she concentrates on AI, privacy, cyber readiness and data governance. As well as advising some of Canada’s leading companies on technology-related issues, Carole is the co-chair of the federal government’s advisory council for the Exposure Notification application; the co-chair of the Data Governance Working Group of the Data Governance Standardization Collaborative at the Standards Council of Canada; a member of the Data Governance Working Group for the Global Partnership on AI; and an advisor to the Law Commission of Ontario’s working group on AI in administrative decisions.