Authors: H. Shonaman

In this post, we explore the concept of ‘data privacy’ with Holly Shonaman, Chief Privacy Officer at RBC.  

The views expressed in this article are those of the interviewee and do not necessarily reflect the position of RBC or Borealis AI.

Data privacy is a hot topic for most organizations. But operating in a highly regulated industry presumably places privacy even higher on RBC’s agenda. Is privacy driven by regulation?

Holly Shonaman (HS): 

Not at all. I agree that we are highly regulated and that privacy is central to those regulations. But I think most data-rich businesses now understand that privacy protection is about much more than simply meeting a regulatory hurdle. 

Like many organizations, we need data from our customers in order to run our business. We need it to ensure our products and services are meaningful and valuable to them. If our customers don’t trust us with their data, it becomes very difficult for us to do our jobs and deliver value. 

So, yes, we are always mindful of the regulatory aspects. But that’s not what guides us: our focus is on building trust with our customers, and privacy is central to that. 

 

Has AI changed the risk definitions around privacy?

(HS): 

My job is to consider how we are using and processing data in all aspects of the business. And in that respect, AI isn’t all that different from more conventional methods of data analytics. 

However, there are clear nuances that surround the use of AI, particularly in a consumer setting. In part, it’s the scale and speed that AI can achieve. That makes privacy and reputational risks more difficult to assess and control. 

But it’s also that the public conversation around AI remains mired in mistrust. People don’t trust that the data is accurate; they don’t trust it is free from bias; they don’t trust how their data is going to be used. They simply don’t believe that machine learning can replace human interactions.

 

What will it take to help Canadians overcome that mistrust in AI?

(HS): 

It comes down to literacy on the topic. I don’t believe people really know what AI is and what protections are around it. I would argue that, as a country, we need to have a much more robust conversation about AI and help Canadians understand what kinds of questions they should be asking. That will require some thinking at a national policy level. But it’s important that – like financial literacy and data literacy – Canadians gain some AI literacy as well. 

 

What role should corporate Privacy Officers play in managing the risk of AI? 

(HS): 

At RBC, data privacy is baked into our processes. Our role in the global Privacy Office is to ensure that AI developers and business leaders understand and assess privacy risks. For example, before launching any new product or initiative, we conduct a privacy risk impact assessment that looks at the entire end-to-end process. If a risk is identified, we have a conversation about the types of controls that should be put in place.

Sometimes that means applying differential privacy techniques or limiting the amount of information that goes into the model. Or it could require further testing, for example to confirm the right level of data granularity, so that individuals in an anonymized data set cannot be re-identified from the model’s outputs.
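The interview does not describe RBC’s actual tooling, but the differential privacy idea mentioned above can be illustrated with a minimal sketch. The classic building block is the Laplace mechanism: add noise calibrated to a query’s sensitivity so that adding or removing any one person’s record barely changes the output’s distribution. Everything here (function names, the example records) is hypothetical, shown only to make the concept concrete:

```python
import math
import random


def laplace_noise(scale):
    # Sample from Laplace(0, scale) via the inverse-CDF transform.
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)


def private_count(records, predicate, epsilon=1.0):
    """Return a noisy count satisfying epsilon-differential privacy.

    A counting query has sensitivity 1 (one person's record changes the
    count by at most 1), so Laplace noise with scale 1/epsilon suffices.
    Smaller epsilon means more noise and stronger privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)


# Hypothetical usage: count customers under 30 without exposing any one record.
customers = [{"age": a} for a in (22, 35, 28, 41, 19, 56, 33)]
noisy = private_count(customers, lambda c: c["age"] < 30, epsilon=0.5)
```

Each individual query result is deliberately inexact; the privacy guarantee comes from the noise, which is why practitioners also track a cumulative "privacy budget" across repeated queries rather than relying on a single calibration.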

 

When you look at the evolution of AI in the Canadian marketplace, what risks are you worried about in the long-term?

(HS): 

I can’t overstate the trust aspect. One of my concerns is that if we overuse AI without understanding the full short- and long-term consequences of models, we could end up destroying any trust we build in the technology as a society.

The problem is that society is changing extraordinarily rapidly, which means the AI community can’t always assess the full impact of its models until the risks are all too apparent. Being able to stay on top of these shifts is our focus.

 

Are businesses taking the risk seriously? 

(HS):

I am very encouraged by the robustness of the way many management teams – including those at RBC – are approaching this issue. We have a very strong risk management team. And our board of directors and executives demand clarity on what we are doing to treat clients fairly and use their data appropriately. 

Generally speaking, I think everyone is very happy to do things with more speed, better information and more efficiency. But they also recognize that if you have a fast car, you need strong brakes. In other words, companies need to have the ability to continuously assess these models and take them ‘offline’ if there is a problem.

 

What can the AI community and developers do in order to better manage privacy risk? 

(HS): 

I would argue that it needs to start at the university and training level – we need to educate developers on ethical AI from the outset. It can’t just be all about code; developers need to understand the social, ethical and privacy issues that influence their field. 

I also think bias and risk should always be top of mind. Developers need to think broadly about a range of potential short-, medium- and long-term scenarios and test against them. That’s not easy; it’s hard work to look into the future.

I would also encourage AI developers to be more front-and-centre, working with the business and the privacy team to talk about what they are doing, the problems they have identified, their data sources and their designs.

 

Can business leaders be doing more? 

(HS): 

Business leaders need to keep doing more of what they are already doing. They need to demand more transparency, more reporting and testing. Perhaps more importantly, leaders need to allow employees to find flaws in their models, and maybe even reward that. 

I think we are also going to see a lot more focus on third-party verification and audits to ensure corporate models and controls are really up to the task. It’s good protection for the business and helps the organization understand the robustness of their own testing. 

 

Do you think privacy concerns will hold back AI development and adoption in Canada? 

(HS): 

Quite to the contrary. I actually believe that – if we get it right – privacy is the key to building trust in AI. It doesn’t matter if you are lending money or selling sweatpants; access to customer data is critical to being able to deepen your relationship with your customers, deliver a great experience to them, and serve them. If you don’t use their data respectfully to support the client relationship, they’ll lose interest in your business. If you breach their privacy or cross the ethical line, you lose their trust. So our focus and attention to privacy controls is actually what will allow us to move ahead with AI development in Canada. 


About Holly Shonaman

As RBC’s Chief Privacy Officer, Holly Shonaman leads RBC’s global Privacy Risk Management program and provides compliance oversight in support of the bank’s leadership in digitally-enabled relationship banking. Ms. Shonaman has held various positions within RBC across the retail banking, commercial banking, and wealth management divisions.
