Authors: F. Agrafioti

With investment in AI technologies trending upward, there is enormous potential for businesses racing to bring cutting-edge AI to market. From increasing diagnostic accuracy in healthcare to predicting supply chain demand in retail and enhancing online customer service in banking, companies across all sectors are using AI to improve customer experience, reduce costs and strengthen operations.

However, the pace of this change has brought tough challenges: recent failures of AI systems have led to mistrust and fear of the technology. In some instances, even among the world’s leading technology companies, it has forced the costly removal of AI products from the market. Many businesses are realizing that they need to slow down and invest in more responsible AI product development.

Building AI responsibly comes with numerous tradeoffs. A recent Borealis AI/RBC* survey found that while 77% of those currently using AI believe it is important for businesses to implement it in an ethical way, 93% say they experience barriers such as cost and lack of understanding when attempting to do so.
In putting issues such as fairness, stability, bias and explainability at the top of their agenda, business leaders are investing in trusted partnerships with their clients at the expense of speed to market. Doing the right thing comes at a cost, and in unregulated environments, businesses could be free to take risks that compromise society.

This is why I believe it is so important that the public, businesses and governments are educated about the risks involved in AI technologies, and that product owners are held to account for the ethical and transparent deployment of these technologies.

Taking bias out of the equation

One particular area of concern to me is bias. I’ve seen too many examples of companies perpetuating racial or gender discrimination through poorly executed technologies such as facial recognition, or violating human rights through biased algorithms. In fact, our survey found that 88% of companies believe bias exists in their organization, yet almost half (44%) do not understand the challenges that bias presents in AI. The most important thing to understand is that this technology is not neutral, and that we are responsible for removing bias at every step.
Companies should review every level of AI development to ensure that any potential bias has been addressed. The different levels could include:

  • Data level: The data that serves as input for training AI models may be collected in a way that under-represents certain groups. This is often the problem with face recognition systems, which tend to serve best the demographic groups most represented in their training data, though the problem is pervasive and not confined to face recognition.
  • Model level: Bias can be introduced at any time during the development of an AI model through architecture decisions made by engineers. These biases may be unintentional, yet the impact on specific groups is the same. For instance, a speech model can be tuned to be more receptive to certain accents, to the detriment of speakers with other accents or languages.
  • Application level: Even when a completely unbiased model can be engineered, there is still risk in how that AI is applied in the real world. The ethical considerations of the product owners, together with the presence (or absence) of regulation and internal controls, can play a major role in tipping the scale.
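The first two levels of review lend themselves to simple automated checks. The sketch below is illustrative only and not part of any Borealis AI or RBC tooling: the function names, the 10% representation threshold and the toy data are all assumptions. It shows a data-level check (which groups are under-represented in a dataset) and a model-level check (per-group accuracy, where a large gap between groups signals disparate performance).

```python
from collections import Counter

def representation_shares(records, group_key):
    """Data-level check: fraction of the dataset belonging to each group."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def flag_underrepresented(shares, threshold=0.10):
    """Groups whose share falls below an (assumed) minimum threshold."""
    return sorted(g for g, s in shares.items() if s < threshold)

def group_accuracy(y_true, y_pred, groups):
    """Model-level check: accuracy computed separately per group."""
    correct, total = Counter(), Counter()
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

# Toy dataset: group C makes up only 5% of the records.
data = [{"group": "A"}] * 80 + [{"group": "B"}] * 15 + [{"group": "C"}] * 5
shares = representation_shares(data, "group")
print(flag_underrepresented(shares))  # -> ['C']

# Toy predictions: the model is right half the time for group A, always for B.
acc = group_accuracy([1, 1, 0, 0], [1, 0, 0, 0], ["A", "A", "B", "B"])
print(acc)  # a large gap between groups is a red flag worth investigating
```

Checks like these are a starting point, not a guarantee: equal representation and equal accuracy do not by themselves address application-level risks, which depend on how and where the model is deployed.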

Why we are counting on responsible AI

While AI is finding applications across different sectors, each industry is unique and AI’s impact on people’s lives and freedoms can vary widely.

As part of the Royal Bank of Canada (RBC), Borealis AI’s mandate is to advance the field of machine learning by bringing products to life for the financial services industry. Banking is a fundamental aspect of our society and one that plays a major role in helping people achieve financial health and stability. The economic prosperity of our communities is partially the responsibility of this sector. As such, any technological misstep may mean that people don’t reach their full potential - in starting a business, sending children to university, or building a house, for instance. Banks have a contract with society that requires them to be a fair and vested partner in its success. 

Borealis AI has the privilege and responsibility of building products that touch the lives of millions of clients. As part of RBC, we are driven by the mission to help our clients thrive and communities prosper, and when it comes to AI this means putting human integrity first.

Over the years, we have developed research practices that ensure AI is developed responsibly, supported by RBC’s data and model governance rules. Whether we are working with regulators to understand risks or scrutinizing our own AI systems through thorough validation, building things the right way means that we routinely trade speed for considerate and equitable innovation.

It is also our belief that knowledge and opportunity should be shared. For this reason, we have decided to contribute our research, publications and scientific code in this area to the community, and to share RBC’s approach and expertise in governing and securing AI models, which has evolved over decades of practice. Under the RESPECT AI program we are also convening industry and academic leaders who contribute their experience and offer practical advice on how to build AI responsibly.

At a time when technology evolves quickly and strains our ability to govern and secure it, it is imperative that we slow down and come together to develop robust solutions to the new challenges before us. We hope that RESPECT AI is a step in this direction, and that this series opens up honest dialogue, exchange and sharing of our collective experiences in building AI responsibly.

*Data were collected as part of the Maru BizPulse program, operated by Maru/Reports and Maru/Matchbox, which collects and tracks key metrics describing how Canadian businesses are feeling, thinking and behaving. The survey audience was made up of owners and senior decision-makers at Canadian businesses, with a particular focus on small and mid-sized businesses. The survey was fielded in September 2020. The sample was sourced through the Maru/Blue proprietary business panel and partners. A total of 622 responses were collected for this portion of the survey. For more information please visit www.marureports.com.
